00:00:00.001 Started by upstream project "autotest-per-patch" build number 130589 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.114 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:52.121 The recommended git tool is: git 00:00:52.121 using credential 00000000-0000-0000-0000-000000000002 00:00:52.123 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:52.135 Fetching changes from the remote Git repository 00:00:52.137 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:52.149 Using shallow fetch with depth 1 00:00:52.149 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:52.149 > git --version # timeout=10 00:00:52.161 > git --version # 'git version 2.39.2' 00:00:52.161 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:52.175 Setting http proxy: proxy-dmz.intel.com:911 00:00:52.175 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:01:07.798 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:01:07.817 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:01:07.834 Checking out Revision d37d6e8a0abef39b377a5f0531b43b2efbbebf34 (FETCH_HEAD) 00:01:07.834 > git config core.sparsecheckout # timeout=10 00:01:07.847 > git read-tree -mu HEAD # timeout=10 00:01:07.867 > git checkout -f d37d6e8a0abef39b377a5f0531b43b2efbbebf34 # timeout=5 00:01:07.887 Commit message: "pool: serialize build page context to json" 00:01:07.888 > git rev-list --no-walk d37d6e8a0abef39b377a5f0531b43b2efbbebf34 # timeout=10 00:01:08.001 [Pipeline] Start of Pipeline 00:01:08.021 [Pipeline] library 00:01:08.023 Loading library shm_lib@master 00:01:08.023 Library shm_lib@master is cached. Copying from home. 00:01:08.043 [Pipeline] node 00:01:23.046 Still waiting to schedule task 00:01:23.047 Waiting for next available executor on ‘vagrant-vm-host’ 00:22:08.861 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_3 00:22:08.863 [Pipeline] { 00:22:08.875 [Pipeline] catchError 00:22:08.877 [Pipeline] { 00:22:08.896 [Pipeline] wrap 00:22:08.906 [Pipeline] { 00:22:08.917 [Pipeline] stage 00:22:08.919 [Pipeline] { (Prologue) 00:22:08.940 [Pipeline] echo 00:22:08.942 Node: VM-host-SM9 00:22:08.950 [Pipeline] cleanWs 00:22:08.960 [WS-CLEANUP] Deleting project workspace... 00:22:08.960 [WS-CLEANUP] Deferred wipeout is used... 
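For reference, the Jenkins checkout above reduces to a shallow, single-revision fetch. A minimal manual equivalent, reusing the repository URL and commit recorded in the log (proxy and credential handling omitted):

  # shallow-fetch only the tip of master, then pin the exact revision
  git init jbp && cd jbp
  git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  git fetch --tags --force --depth=1 origin refs/heads/master
  git checkout -f d37d6e8a0abef39b377a5f0531b43b2efbbebf34

The --depth=1 fetch keeps the workspace clone small; the plugin then resolves FETCH_HEAD and checks the commit out directly.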
00:22:08.966 [WS-CLEANUP] done 00:22:09.153 [Pipeline] setCustomBuildProperty 00:22:09.247 [Pipeline] httpRequest 00:22:09.583 [Pipeline] echo 00:22:09.585 Sorcerer 10.211.164.101 is alive 00:22:09.595 [Pipeline] retry 00:22:09.597 [Pipeline] { 00:22:09.613 [Pipeline] httpRequest 00:22:09.617 HttpMethod: GET 00:22:09.618 URL: http://10.211.164.101/packages/jbp_d37d6e8a0abef39b377a5f0531b43b2efbbebf34.tar.gz 00:22:09.618 Sending request to url: http://10.211.164.101/packages/jbp_d37d6e8a0abef39b377a5f0531b43b2efbbebf34.tar.gz 00:22:09.619 Response Code: HTTP/1.1 200 OK 00:22:09.619 Success: Status code 200 is in the accepted range: 200,404 00:22:09.620 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/jbp_d37d6e8a0abef39b377a5f0531b43b2efbbebf34.tar.gz 00:22:09.766 [Pipeline] } 00:22:09.786 [Pipeline] // retry 00:22:09.796 [Pipeline] sh 00:22:10.075 + tar --no-same-owner -xf jbp_d37d6e8a0abef39b377a5f0531b43b2efbbebf34.tar.gz 00:22:10.090 [Pipeline] httpRequest 00:22:10.487 [Pipeline] echo 00:22:10.488 Sorcerer 10.211.164.101 is alive 00:22:10.497 [Pipeline] retry 00:22:10.499 [Pipeline] { 00:22:10.513 [Pipeline] httpRequest 00:22:10.518 HttpMethod: GET 00:22:10.518 URL: http://10.211.164.101/packages/spdk_1b1c3081e7433ef3ee5ea712b81b554bbbca8f0a.tar.gz 00:22:10.519 Sending request to url: http://10.211.164.101/packages/spdk_1b1c3081e7433ef3ee5ea712b81b554bbbca8f0a.tar.gz 00:22:10.520 Response Code: HTTP/1.1 200 OK 00:22:10.521 Success: Status code 200 is in the accepted range: 200,404 00:22:10.521 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/spdk_1b1c3081e7433ef3ee5ea712b81b554bbbca8f0a.tar.gz 00:22:14.370 [Pipeline] } 00:22:14.388 [Pipeline] // retry 00:22:14.396 [Pipeline] sh 00:22:14.676 + tar --no-same-owner -xf spdk_1b1c3081e7433ef3ee5ea712b81b554bbbca8f0a.tar.gz 00:22:17.994 [Pipeline] sh 00:22:18.274 + git -C spdk log --oneline -n5 00:22:18.274 1b1c3081e bdev: explicitly inline bdev_channel_get_io() 00:22:18.274 165425556 bdev/passthru: add bdev_io_stack support 00:22:18.274 4f975f22c [TEST] bdev: save stack_frame instead of bdev_io in io_submitted TAILQ 00:22:18.274 392076696 bdev: Add spdk_bdev_io_submit API 00:22:18.274 ef2413376 bdev: Add spdk_bdev_io_to_ctx 00:22:18.295 [Pipeline] writeFile 00:22:18.311 [Pipeline] sh 00:22:18.592 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:22:18.605 [Pipeline] sh 00:22:18.886 + cat autorun-spdk.conf 00:22:18.886 SPDK_RUN_FUNCTIONAL_TEST=1 00:22:18.886 SPDK_TEST_NVMF=1 00:22:18.886 SPDK_TEST_NVMF_TRANSPORT=tcp 00:22:18.886 SPDK_TEST_USDT=1 00:22:18.886 SPDK_TEST_NVMF_MDNS=1 00:22:18.886 SPDK_RUN_UBSAN=1 00:22:18.886 NET_TYPE=virt 00:22:18.886 SPDK_JSONRPC_GO_CLIENT=1 00:22:18.886 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:22:18.893 RUN_NIGHTLY=0 00:22:18.895 [Pipeline] } 00:22:18.912 [Pipeline] // stage 00:22:18.933 [Pipeline] stage 00:22:18.937 [Pipeline] { (Run VM) 00:22:18.953 [Pipeline] sh 00:22:19.237 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:22:19.237 + echo 'Start stage prepare_nvme.sh' 00:22:19.237 Start stage prepare_nvme.sh 00:22:19.237 + [[ -n 1 ]] 00:22:19.237 + disk_prefix=ex1 00:22:19.237 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_3 ]] 00:22:19.237 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/autorun-spdk.conf ]] 00:22:19.237 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/autorun-spdk.conf 00:22:19.237 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:22:19.237 ++ SPDK_TEST_NVMF=1 00:22:19.237 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:22:19.237 ++ 
SPDK_TEST_USDT=1 00:22:19.237 ++ SPDK_TEST_NVMF_MDNS=1 00:22:19.237 ++ SPDK_RUN_UBSAN=1 00:22:19.237 ++ NET_TYPE=virt 00:22:19.237 ++ SPDK_JSONRPC_GO_CLIENT=1 00:22:19.237 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:22:19.237 ++ RUN_NIGHTLY=0 00:22:19.237 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_3 00:22:19.237 + nvme_files=() 00:22:19.237 + declare -A nvme_files 00:22:19.237 + backend_dir=/var/lib/libvirt/images/backends 00:22:19.237 + nvme_files['nvme.img']=5G 00:22:19.237 + nvme_files['nvme-cmb.img']=5G 00:22:19.237 + nvme_files['nvme-multi0.img']=4G 00:22:19.237 + nvme_files['nvme-multi1.img']=4G 00:22:19.237 + nvme_files['nvme-multi2.img']=4G 00:22:19.237 + nvme_files['nvme-openstack.img']=8G 00:22:19.237 + nvme_files['nvme-zns.img']=5G 00:22:19.237 + (( SPDK_TEST_NVME_PMR == 1 )) 00:22:19.237 + (( SPDK_TEST_FTL == 1 )) 00:22:19.237 + (( SPDK_TEST_NVME_FDP == 1 )) 00:22:19.238 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:22:19.238 + for nvme in "${!nvme_files[@]}" 00:22:19.238 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:22:19.238 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:22:19.238 + for nvme in "${!nvme_files[@]}" 00:22:19.238 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:22:19.238 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:22:19.238 + for nvme in "${!nvme_files[@]}" 00:22:19.238 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:22:19.238 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:22:19.238 + for nvme in "${!nvme_files[@]}" 00:22:19.238 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:22:19.238 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:22:19.238 + for nvme in "${!nvme_files[@]}" 00:22:19.238 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:22:19.238 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:22:19.238 + for nvme in "${!nvme_files[@]}" 00:22:19.238 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:22:19.238 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:22:19.238 + for nvme in "${!nvme_files[@]}" 00:22:19.238 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:22:19.806 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:22:19.806 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:22:19.806 + echo 'End stage prepare_nvme.sh' 00:22:19.806 End stage prepare_nvme.sh 00:22:19.817 [Pipeline] sh 00:22:20.097 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:22:20.097 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b 
/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:22:20.097 00:22:20.097 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_3/spdk/scripts/vagrant 00:22:20.097 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_3/spdk 00:22:20.097 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_3 00:22:20.097 HELP=0 00:22:20.097 DRY_RUN=0 00:22:20.097 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:22:20.097 NVME_DISKS_TYPE=nvme,nvme, 00:22:20.097 NVME_AUTO_CREATE=0 00:22:20.097 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:22:20.097 NVME_CMB=,, 00:22:20.097 NVME_PMR=,, 00:22:20.097 NVME_ZNS=,, 00:22:20.097 NVME_MS=,, 00:22:20.097 NVME_FDP=,, 00:22:20.097 SPDK_VAGRANT_DISTRO=fedora39 00:22:20.097 SPDK_VAGRANT_VMCPU=10 00:22:20.097 SPDK_VAGRANT_VMRAM=12288 00:22:20.097 SPDK_VAGRANT_PROVIDER=libvirt 00:22:20.097 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:22:20.097 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:22:20.097 SPDK_OPENSTACK_NETWORK=0 00:22:20.097 VAGRANT_PACKAGE_BOX=0 00:22:20.097 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 00:22:20.097 FORCE_DISTRO=true 00:22:20.097 VAGRANT_BOX_VERSION= 00:22:20.097 EXTRA_VAGRANTFILES= 00:22:20.097 NIC_MODEL=e1000 00:22:20.097 00:22:20.097 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_3/fedora39-libvirt' 00:22:20.097 /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_3 00:22:23.383 Bringing machine 'default' up with 'libvirt' provider... 00:22:24.316 ==> default: Creating image (snapshot of base box volume). 00:22:24.316 ==> default: Creating domain with the following settings... 
00:22:24.316 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727814079_a1e73f0f90d05ffae2cc 00:22:24.316 ==> default: -- Domain type: kvm 00:22:24.316 ==> default: -- Cpus: 10 00:22:24.316 ==> default: -- Feature: acpi 00:22:24.316 ==> default: -- Feature: apic 00:22:24.316 ==> default: -- Feature: pae 00:22:24.316 ==> default: -- Memory: 12288M 00:22:24.316 ==> default: -- Memory Backing: hugepages: 00:22:24.316 ==> default: -- Management MAC: 00:22:24.316 ==> default: -- Loader: 00:22:24.316 ==> default: -- Nvram: 00:22:24.316 ==> default: -- Base box: spdk/fedora39 00:22:24.316 ==> default: -- Storage pool: default 00:22:24.316 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727814079_a1e73f0f90d05ffae2cc.img (20G) 00:22:24.316 ==> default: -- Volume Cache: default 00:22:24.316 ==> default: -- Kernel: 00:22:24.316 ==> default: -- Initrd: 00:22:24.316 ==> default: -- Graphics Type: vnc 00:22:24.316 ==> default: -- Graphics Port: -1 00:22:24.316 ==> default: -- Graphics IP: 127.0.0.1 00:22:24.316 ==> default: -- Graphics Password: Not defined 00:22:24.316 ==> default: -- Video Type: cirrus 00:22:24.316 ==> default: -- Video VRAM: 9216 00:22:24.316 ==> default: -- Sound Type: 00:22:24.316 ==> default: -- Keymap: en-us 00:22:24.316 ==> default: -- TPM Path: 00:22:24.316 ==> default: -- INPUT: type=mouse, bus=ps2 00:22:24.316 ==> default: -- Command line args: 00:22:24.316 ==> default: -> value=-device, 00:22:24.316 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:22:24.316 ==> default: -> value=-drive, 00:22:24.316 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:22:24.316 ==> default: -> value=-device, 00:22:24.317 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:22:24.317 ==> default: -> value=-device, 00:22:24.317 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:22:24.317 ==> default: -> value=-drive, 00:22:24.317 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:22:24.317 ==> default: -> value=-device, 00:22:24.317 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:22:24.317 ==> default: -> value=-drive, 00:22:24.317 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:22:24.317 ==> default: -> value=-device, 00:22:24.317 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:22:24.317 ==> default: -> value=-drive, 00:22:24.317 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:22:24.317 ==> default: -> value=-device, 00:22:24.317 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:22:24.317 ==> default: Creating shared folders metadata... 00:22:24.317 ==> default: Starting domain. 00:22:25.696 ==> default: Waiting for domain to get an IP address... 00:22:43.776 ==> default: Waiting for SSH to become available... 00:22:43.776 ==> default: Configuring and enabling network interfaces... 
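The -device/-drive pairs above are how the VM's NVMe topology is assembled: each -device nvme defines a controller, each backing image is attached host-side with -drive if=none, and each -device nvme-ns binds one of those drives to a controller as a numbered namespace. Condensed, the second controller from the log looks like the sketch below (backend paths shortened; machine, memory, and display flags omitted):

  # controller nvme-1 (serial 12341) with three namespaces, nsid 1..3
  qemu-system-x86_64 ... \
    -device nvme,id=nvme-1,serial=12341,addr=0x11 \
    -drive format=raw,file=ex1-nvme-multi0.img,if=none,id=nvme-1-drive0 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1 \
    -drive format=raw,file=ex1-nvme-multi1.img,if=none,id=nvme-1-drive1 \
    -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2 \
    -drive format=raw,file=ex1-nvme-multi2.img,if=none,id=nvme-1-drive2 \
    -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3

This layout is why the guest later reports nvme0 with a single nvme0n1, and nvme1 with nvme1n1 through nvme1n3.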
00:22:45.674 default: SSH address: 192.168.121.126:22 00:22:45.674 default: SSH username: vagrant 00:22:45.674 default: SSH auth method: private key 00:22:48.203 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:22:56.316 ==> default: Mounting SSHFS shared folder... 00:22:56.883 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:22:56.883 ==> default: Checking Mount.. 00:22:58.259 ==> default: Folder Successfully Mounted! 00:22:58.259 ==> default: Running provisioner: file... 00:22:58.829 default: ~/.gitconfig => .gitconfig 00:22:59.395 00:22:59.395 SUCCESS! 00:22:59.395 00:22:59.395 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use. 00:22:59.395 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:22:59.395 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm. 00:22:59.395 00:22:59.404 [Pipeline] } 00:22:59.422 [Pipeline] // stage 00:22:59.432 [Pipeline] dir 00:22:59.433 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/fedora39-libvirt 00:22:59.434 [Pipeline] { 00:22:59.447 [Pipeline] catchError 00:22:59.448 [Pipeline] { 00:22:59.461 [Pipeline] sh 00:22:59.733 + vagrant ssh-config --host vagrant 00:22:59.734 + sed -ne /^Host/,$p 00:22:59.734 + tee ssh_conf 00:23:03.917 Host vagrant 00:23:03.917 HostName 192.168.121.126 00:23:03.917 User vagrant 00:23:03.917 Port 22 00:23:03.917 UserKnownHostsFile /dev/null 00:23:03.917 StrictHostKeyChecking no 00:23:03.917 PasswordAuthentication no 00:23:03.917 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:23:03.917 IdentitiesOnly yes 00:23:03.917 LogLevel FATAL 00:23:03.917 ForwardAgent yes 00:23:03.917 ForwardX11 yes 00:23:03.917 00:23:03.930 [Pipeline] withEnv 00:23:03.932 [Pipeline] { 00:23:03.949 [Pipeline] sh 00:23:04.237 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:23:04.237 source /etc/os-release 00:23:04.237 [[ -e /image.version ]] && img=$(< /image.version) 00:23:04.238 # Minimal, systemd-like check. 00:23:04.238 if [[ -e /.dockerenv ]]; then 00:23:04.238 # Clear garbage from the node's name: 00:23:04.238 # agt-er_autotest_547-896 -> autotest_547-896 00:23:04.238 # $HOSTNAME is the actual container id 00:23:04.238 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:23:04.238 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:23:04.238 # We can assume this is a mount from a host where container is running, 00:23:04.238 # so fetch its hostname to easily identify the target swarm worker. 
00:23:04.238 container="$(< /etc/hostname) ($agent)" 00:23:04.238 else 00:23:04.238 # Fallback 00:23:04.238 container=$agent 00:23:04.238 fi 00:23:04.238 fi 00:23:04.238 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:23:04.238 00:23:04.248 [Pipeline] } 00:23:04.262 [Pipeline] // withEnv 00:23:04.270 [Pipeline] setCustomBuildProperty 00:23:04.283 [Pipeline] stage 00:23:04.284 [Pipeline] { (Tests) 00:23:04.300 [Pipeline] sh 00:23:04.577 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:23:04.589 [Pipeline] sh 00:23:04.866 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:23:04.885 [Pipeline] timeout 00:23:04.885 Timeout set to expire in 1 hr 0 min 00:23:04.887 [Pipeline] { 00:23:04.901 [Pipeline] sh 00:23:05.182 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:23:05.749 HEAD is now at 1b1c3081e bdev: explicitly inline bdev_channel_get_io() 00:23:05.759 [Pipeline] sh 00:23:06.037 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:23:06.309 [Pipeline] sh 00:23:06.588 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:23:06.605 [Pipeline] sh 00:23:06.889 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:23:06.889 ++ readlink -f spdk_repo 00:23:06.889 + DIR_ROOT=/home/vagrant/spdk_repo 00:23:06.889 + [[ -n /home/vagrant/spdk_repo ]] 00:23:06.889 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:23:06.889 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:23:06.889 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:23:06.889 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:23:06.889 + [[ -d /home/vagrant/spdk_repo/output ]] 00:23:06.889 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:23:06.889 + cd /home/vagrant/spdk_repo 00:23:06.889 + source /etc/os-release 00:23:06.889 ++ NAME='Fedora Linux' 00:23:06.889 ++ VERSION='39 (Cloud Edition)' 00:23:06.889 ++ ID=fedora 00:23:06.889 ++ VERSION_ID=39 00:23:06.889 ++ VERSION_CODENAME= 00:23:06.889 ++ PLATFORM_ID=platform:f39 00:23:06.889 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:23:06.889 ++ ANSI_COLOR='0;38;2;60;110;180' 00:23:06.889 ++ LOGO=fedora-logo-icon 00:23:06.889 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:23:06.889 ++ HOME_URL=https://fedoraproject.org/ 00:23:06.889 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:23:06.889 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:23:06.889 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:23:06.889 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:23:06.889 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:23:06.889 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:23:06.889 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:23:06.889 ++ SUPPORT_END=2024-11-12 00:23:06.889 ++ VARIANT='Cloud Edition' 00:23:06.889 ++ VARIANT_ID=cloud 00:23:06.889 + uname -a 00:23:06.889 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:23:06.889 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:23:07.455 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:07.455 Hugepages 00:23:07.455 node hugesize free / total 00:23:07.455 node0 1048576kB 0 / 0 00:23:07.455 node0 2048kB 0 / 0 00:23:07.455 00:23:07.455 Type BDF Vendor Device NUMA Driver Device Block devices 00:23:07.455 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:23:07.455 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:23:07.455 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:23:07.455 + rm -f /tmp/spdk-ld-path 00:23:07.455 + source autorun-spdk.conf 00:23:07.455 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:23:07.455 ++ SPDK_TEST_NVMF=1 00:23:07.455 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:23:07.455 ++ SPDK_TEST_USDT=1 00:23:07.455 ++ SPDK_TEST_NVMF_MDNS=1 00:23:07.455 ++ SPDK_RUN_UBSAN=1 00:23:07.455 ++ NET_TYPE=virt 00:23:07.455 ++ SPDK_JSONRPC_GO_CLIENT=1 00:23:07.455 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:23:07.455 ++ RUN_NIGHTLY=0 00:23:07.455 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:23:07.455 + [[ -n '' ]] 00:23:07.455 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:23:07.455 + for M in /var/spdk/build-*-manifest.txt 00:23:07.455 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:23:07.455 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:23:07.713 + for M in /var/spdk/build-*-manifest.txt 00:23:07.713 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:23:07.713 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:23:07.713 + for M in /var/spdk/build-*-manifest.txt 00:23:07.713 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:23:07.713 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:23:07.713 ++ uname 00:23:07.713 + [[ Linux == \L\i\n\u\x ]] 00:23:07.713 + sudo dmesg -T 00:23:07.713 + sudo dmesg --clear 00:23:07.713 + dmesg_pid=5270 00:23:07.713 + sudo dmesg -Tw 00:23:07.713 + [[ Fedora Linux == FreeBSD ]] 00:23:07.713 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:23:07.713 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:23:07.713 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:23:07.713 + [[ -x /usr/src/fio-static/fio ]] 00:23:07.713 + export FIO_BIN=/usr/src/fio-static/fio 00:23:07.714 + FIO_BIN=/usr/src/fio-static/fio 00:23:07.714 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:23:07.714 + [[ ! -v VFIO_QEMU_BIN ]] 00:23:07.714 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:23:07.714 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:23:07.714 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:23:07.714 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:23:07.714 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:23:07.714 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:23:07.714 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:23:07.714 Test configuration: 00:23:07.714 SPDK_RUN_FUNCTIONAL_TEST=1 00:23:07.714 SPDK_TEST_NVMF=1 00:23:07.714 SPDK_TEST_NVMF_TRANSPORT=tcp 00:23:07.714 SPDK_TEST_USDT=1 00:23:07.714 SPDK_TEST_NVMF_MDNS=1 00:23:07.714 SPDK_RUN_UBSAN=1 00:23:07.714 NET_TYPE=virt 00:23:07.714 SPDK_JSONRPC_GO_CLIENT=1 00:23:07.714 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:23:07.714 RUN_NIGHTLY=0 20:22:02 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:23:07.714 20:22:02 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:07.714 20:22:02 -- scripts/common.sh@15 -- $ shopt -s extglob 00:23:07.714 20:22:02 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:23:07.714 20:22:02 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.714 20:22:02 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.714 20:22:02 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.714 20:22:02 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.714 20:22:02 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.714 20:22:02 -- paths/export.sh@5 -- $ export PATH 00:23:07.714 20:22:02 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.714 20:22:02 
-- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:23:07.714 20:22:02 -- common/autobuild_common.sh@479 -- $ date +%s 00:23:07.714 20:22:02 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727814122.XXXXXX 00:23:07.714 20:22:02 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727814122.TpXGPN 00:23:07.714 20:22:02 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:23:07.714 20:22:02 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:23:07.714 20:22:02 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:23:07.714 20:22:02 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:23:07.714 20:22:02 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:23:07.714 20:22:02 -- common/autobuild_common.sh@495 -- $ get_config_params 00:23:07.714 20:22:02 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:23:07.714 20:22:02 -- common/autotest_common.sh@10 -- $ set +x 00:23:07.714 20:22:02 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:23:07.714 20:22:02 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:23:07.714 20:22:02 -- pm/common@17 -- $ local monitor 00:23:07.714 20:22:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:07.714 20:22:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:07.714 20:22:02 -- pm/common@25 -- $ sleep 1 00:23:07.714 20:22:02 -- pm/common@21 -- $ date +%s 00:23:07.714 20:22:02 -- pm/common@21 -- $ date +%s 00:23:07.714 20:22:02 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727814122 00:23:07.714 20:22:02 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727814122 00:23:07.714 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727814122_collect-vmstat.pm.log 00:23:07.714 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727814122_collect-cpu-load.pm.log 00:23:08.649 20:22:03 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:23:08.649 20:22:03 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:23:08.649 20:22:03 -- spdk/autobuild.sh@12 -- $ umask 022 00:23:08.649 20:22:03 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:23:08.649 20:22:03 -- spdk/autobuild.sh@16 -- $ date -u 00:23:08.906 Tue Oct 1 08:22:03 PM UTC 2024 00:23:08.906 20:22:03 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:23:08.906 v25.01-pre-28-g1b1c3081e 00:23:08.906 20:22:03 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:23:08.906 20:22:03 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:23:08.906 20:22:03 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:23:08.906 20:22:03 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:23:08.906 20:22:03 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:23:08.906 20:22:03 -- 
common/autotest_common.sh@10 -- $ set +x 00:23:08.906 ************************************ 00:23:08.906 START TEST ubsan 00:23:08.906 ************************************ 00:23:08.906 using ubsan 00:23:08.906 20:22:03 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:23:08.906 00:23:08.906 real 0m0.000s 00:23:08.906 user 0m0.000s 00:23:08.906 sys 0m0.000s 00:23:08.906 20:22:03 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:23:08.907 20:22:03 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:23:08.907 ************************************ 00:23:08.907 END TEST ubsan 00:23:08.907 ************************************ 00:23:08.907 20:22:03 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:23:08.907 20:22:03 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:23:08.907 20:22:03 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:23:08.907 20:22:03 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:23:08.907 20:22:03 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:23:08.907 20:22:03 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:23:08.907 20:22:03 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:23:08.907 20:22:03 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:23:08.907 20:22:03 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:23:08.907 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:23:08.907 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:23:09.511 Using 'verbs' RDMA provider 00:23:22.274 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:23:34.477 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:23:34.477 go version go1.21.1 linux/amd64 00:23:34.735 Creating mk/config.mk...done. 00:23:34.735 Creating mk/cc.flags.mk...done. 00:23:34.735 Type 'make' to build. 00:23:34.735 20:22:29 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:23:34.735 20:22:29 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:23:34.735 20:22:29 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:23:34.735 20:22:29 -- common/autotest_common.sh@10 -- $ set +x 00:23:34.735 ************************************ 00:23:34.735 START TEST make 00:23:34.735 ************************************ 00:23:34.735 20:22:29 make -- common/autotest_common.sh@1125 -- $ make -j10 00:23:34.993 make[1]: Nothing to be done for 'all'. 
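The configure invocation above selects this run's feature set (debug build, UBSan, Avahi/mDNS, the Go JSON-RPC client, shared libraries), and the make that follows first descends into the bundled DPDK, which produces the Meson output below. A minimal out-of-CI sketch, assuming the public SPDK mirror rather than this job's internal sources, with a subset of the flags from the log:

  # clone with submodules so the bundled DPDK is present
  git clone --recursive https://github.com/spdk/spdk.git
  cd spdk
  ./configure --enable-debug --enable-ubsan --with-shared
  make -j"$(nproc)"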
00:23:53.072 The Meson build system 00:23:53.072 Version: 1.5.0 00:23:53.072 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:23:53.072 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:23:53.072 Build type: native build 00:23:53.072 Program cat found: YES (/usr/bin/cat) 00:23:53.072 Project name: DPDK 00:23:53.072 Project version: 24.03.0 00:23:53.072 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:23:53.072 C linker for the host machine: cc ld.bfd 2.40-14 00:23:53.072 Host machine cpu family: x86_64 00:23:53.072 Host machine cpu: x86_64 00:23:53.072 Message: ## Building in Developer Mode ## 00:23:53.072 Program pkg-config found: YES (/usr/bin/pkg-config) 00:23:53.072 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:23:53.072 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:23:53.072 Program python3 found: YES (/usr/bin/python3) 00:23:53.072 Program cat found: YES (/usr/bin/cat) 00:23:53.072 Compiler for C supports arguments -march=native: YES 00:23:53.072 Checking for size of "void *" : 8 00:23:53.072 Checking for size of "void *" : 8 (cached) 00:23:53.072 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:23:53.072 Library m found: YES 00:23:53.072 Library numa found: YES 00:23:53.072 Has header "numaif.h" : YES 00:23:53.072 Library fdt found: NO 00:23:53.072 Library execinfo found: NO 00:23:53.072 Has header "execinfo.h" : YES 00:23:53.072 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:23:53.072 Run-time dependency libarchive found: NO (tried pkgconfig) 00:23:53.072 Run-time dependency libbsd found: NO (tried pkgconfig) 00:23:53.072 Run-time dependency jansson found: NO (tried pkgconfig) 00:23:53.072 Run-time dependency openssl found: YES 3.1.1 00:23:53.072 Run-time dependency libpcap found: YES 1.10.4 00:23:53.072 Has header "pcap.h" with dependency libpcap: YES 00:23:53.072 Compiler for C supports arguments -Wcast-qual: YES 00:23:53.072 Compiler for C supports arguments -Wdeprecated: YES 00:23:53.072 Compiler for C supports arguments -Wformat: YES 00:23:53.072 Compiler for C supports arguments -Wformat-nonliteral: NO 00:23:53.072 Compiler for C supports arguments -Wformat-security: NO 00:23:53.072 Compiler for C supports arguments -Wmissing-declarations: YES 00:23:53.072 Compiler for C supports arguments -Wmissing-prototypes: YES 00:23:53.072 Compiler for C supports arguments -Wnested-externs: YES 00:23:53.072 Compiler for C supports arguments -Wold-style-definition: YES 00:23:53.072 Compiler for C supports arguments -Wpointer-arith: YES 00:23:53.072 Compiler for C supports arguments -Wsign-compare: YES 00:23:53.072 Compiler for C supports arguments -Wstrict-prototypes: YES 00:23:53.072 Compiler for C supports arguments -Wundef: YES 00:23:53.072 Compiler for C supports arguments -Wwrite-strings: YES 00:23:53.072 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:23:53.072 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:23:53.072 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:23:53.072 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:23:53.072 Program objdump found: YES (/usr/bin/objdump) 00:23:53.072 Compiler for C supports arguments -mavx512f: YES 00:23:53.072 Checking if "AVX512 checking" compiles: YES 00:23:53.072 Fetching value of define "__SSE4_2__" : 1 00:23:53.072 Fetching value of define 
"__AES__" : 1 00:23:53.072 Fetching value of define "__AVX__" : 1 00:23:53.072 Fetching value of define "__AVX2__" : 1 00:23:53.072 Fetching value of define "__AVX512BW__" : (undefined) 00:23:53.072 Fetching value of define "__AVX512CD__" : (undefined) 00:23:53.072 Fetching value of define "__AVX512DQ__" : (undefined) 00:23:53.072 Fetching value of define "__AVX512F__" : (undefined) 00:23:53.072 Fetching value of define "__AVX512VL__" : (undefined) 00:23:53.072 Fetching value of define "__PCLMUL__" : 1 00:23:53.072 Fetching value of define "__RDRND__" : 1 00:23:53.072 Fetching value of define "__RDSEED__" : 1 00:23:53.072 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:23:53.072 Fetching value of define "__znver1__" : (undefined) 00:23:53.072 Fetching value of define "__znver2__" : (undefined) 00:23:53.072 Fetching value of define "__znver3__" : (undefined) 00:23:53.072 Fetching value of define "__znver4__" : (undefined) 00:23:53.072 Compiler for C supports arguments -Wno-format-truncation: YES 00:23:53.072 Message: lib/log: Defining dependency "log" 00:23:53.072 Message: lib/kvargs: Defining dependency "kvargs" 00:23:53.072 Message: lib/telemetry: Defining dependency "telemetry" 00:23:53.072 Checking for function "getentropy" : NO 00:23:53.072 Message: lib/eal: Defining dependency "eal" 00:23:53.072 Message: lib/ring: Defining dependency "ring" 00:23:53.072 Message: lib/rcu: Defining dependency "rcu" 00:23:53.072 Message: lib/mempool: Defining dependency "mempool" 00:23:53.072 Message: lib/mbuf: Defining dependency "mbuf" 00:23:53.072 Fetching value of define "__PCLMUL__" : 1 (cached) 00:23:53.072 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:23:53.072 Compiler for C supports arguments -mpclmul: YES 00:23:53.072 Compiler for C supports arguments -maes: YES 00:23:53.072 Compiler for C supports arguments -mavx512f: YES (cached) 00:23:53.072 Compiler for C supports arguments -mavx512bw: YES 00:23:53.072 Compiler for C supports arguments -mavx512dq: YES 00:23:53.072 Compiler for C supports arguments -mavx512vl: YES 00:23:53.072 Compiler for C supports arguments -mvpclmulqdq: YES 00:23:53.072 Compiler for C supports arguments -mavx2: YES 00:23:53.072 Compiler for C supports arguments -mavx: YES 00:23:53.072 Message: lib/net: Defining dependency "net" 00:23:53.072 Message: lib/meter: Defining dependency "meter" 00:23:53.072 Message: lib/ethdev: Defining dependency "ethdev" 00:23:53.072 Message: lib/pci: Defining dependency "pci" 00:23:53.072 Message: lib/cmdline: Defining dependency "cmdline" 00:23:53.072 Message: lib/hash: Defining dependency "hash" 00:23:53.072 Message: lib/timer: Defining dependency "timer" 00:23:53.072 Message: lib/compressdev: Defining dependency "compressdev" 00:23:53.072 Message: lib/cryptodev: Defining dependency "cryptodev" 00:23:53.072 Message: lib/dmadev: Defining dependency "dmadev" 00:23:53.072 Compiler for C supports arguments -Wno-cast-qual: YES 00:23:53.072 Message: lib/power: Defining dependency "power" 00:23:53.072 Message: lib/reorder: Defining dependency "reorder" 00:23:53.072 Message: lib/security: Defining dependency "security" 00:23:53.072 Has header "linux/userfaultfd.h" : YES 00:23:53.072 Has header "linux/vduse.h" : YES 00:23:53.072 Message: lib/vhost: Defining dependency "vhost" 00:23:53.072 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:23:53.072 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:23:53.072 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:23:53.073 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:23:53.073 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:23:53.073 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:23:53.073 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:23:53.073 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:23:53.073 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:23:53.073 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:23:53.073 Program doxygen found: YES (/usr/local/bin/doxygen) 00:23:53.073 Configuring doxy-api-html.conf using configuration 00:23:53.073 Configuring doxy-api-man.conf using configuration 00:23:53.073 Program mandb found: YES (/usr/bin/mandb) 00:23:53.073 Program sphinx-build found: NO 00:23:53.073 Configuring rte_build_config.h using configuration 00:23:53.073 Message: 00:23:53.073 ================= 00:23:53.073 Applications Enabled 00:23:53.073 ================= 00:23:53.073 00:23:53.073 apps: 00:23:53.073 00:23:53.073 00:23:53.073 Message: 00:23:53.073 ================= 00:23:53.073 Libraries Enabled 00:23:53.073 ================= 00:23:53.073 00:23:53.073 libs: 00:23:53.073 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:23:53.073 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:23:53.073 cryptodev, dmadev, power, reorder, security, vhost, 00:23:53.073 00:23:53.073 Message: 00:23:53.073 =============== 00:23:53.073 Drivers Enabled 00:23:53.073 =============== 00:23:53.073 00:23:53.073 common: 00:23:53.073 00:23:53.073 bus: 00:23:53.073 pci, vdev, 00:23:53.073 mempool: 00:23:53.073 ring, 00:23:53.073 dma: 00:23:53.073 00:23:53.073 net: 00:23:53.073 00:23:53.073 crypto: 00:23:53.073 00:23:53.073 compress: 00:23:53.073 00:23:53.073 vdpa: 00:23:53.073 00:23:53.073 00:23:53.073 Message: 00:23:53.073 ================= 00:23:53.073 Content Skipped 00:23:53.073 ================= 00:23:53.073 00:23:53.073 apps: 00:23:53.073 dumpcap: explicitly disabled via build config 00:23:53.073 graph: explicitly disabled via build config 00:23:53.073 pdump: explicitly disabled via build config 00:23:53.073 proc-info: explicitly disabled via build config 00:23:53.073 test-acl: explicitly disabled via build config 00:23:53.073 test-bbdev: explicitly disabled via build config 00:23:53.073 test-cmdline: explicitly disabled via build config 00:23:53.073 test-compress-perf: explicitly disabled via build config 00:23:53.073 test-crypto-perf: explicitly disabled via build config 00:23:53.073 test-dma-perf: explicitly disabled via build config 00:23:53.073 test-eventdev: explicitly disabled via build config 00:23:53.073 test-fib: explicitly disabled via build config 00:23:53.073 test-flow-perf: explicitly disabled via build config 00:23:53.073 test-gpudev: explicitly disabled via build config 00:23:53.073 test-mldev: explicitly disabled via build config 00:23:53.073 test-pipeline: explicitly disabled via build config 00:23:53.073 test-pmd: explicitly disabled via build config 00:23:53.073 test-regex: explicitly disabled via build config 00:23:53.073 test-sad: explicitly disabled via build config 00:23:53.073 test-security-perf: explicitly disabled via build config 00:23:53.073 00:23:53.073 libs: 00:23:53.073 argparse: explicitly disabled via build config 00:23:53.073 metrics: explicitly disabled via build config 00:23:53.073 acl: explicitly disabled via build config 00:23:53.073 bbdev: explicitly disabled via build config 
00:23:53.073 bitratestats: explicitly disabled via build config 00:23:53.073 bpf: explicitly disabled via build config 00:23:53.073 cfgfile: explicitly disabled via build config 00:23:53.073 distributor: explicitly disabled via build config 00:23:53.073 efd: explicitly disabled via build config 00:23:53.073 eventdev: explicitly disabled via build config 00:23:53.073 dispatcher: explicitly disabled via build config 00:23:53.073 gpudev: explicitly disabled via build config 00:23:53.073 gro: explicitly disabled via build config 00:23:53.073 gso: explicitly disabled via build config 00:23:53.073 ip_frag: explicitly disabled via build config 00:23:53.073 jobstats: explicitly disabled via build config 00:23:53.073 latencystats: explicitly disabled via build config 00:23:53.073 lpm: explicitly disabled via build config 00:23:53.073 member: explicitly disabled via build config 00:23:53.073 pcapng: explicitly disabled via build config 00:23:53.073 rawdev: explicitly disabled via build config 00:23:53.073 regexdev: explicitly disabled via build config 00:23:53.073 mldev: explicitly disabled via build config 00:23:53.073 rib: explicitly disabled via build config 00:23:53.073 sched: explicitly disabled via build config 00:23:53.073 stack: explicitly disabled via build config 00:23:53.073 ipsec: explicitly disabled via build config 00:23:53.073 pdcp: explicitly disabled via build config 00:23:53.073 fib: explicitly disabled via build config 00:23:53.073 port: explicitly disabled via build config 00:23:53.073 pdump: explicitly disabled via build config 00:23:53.073 table: explicitly disabled via build config 00:23:53.073 pipeline: explicitly disabled via build config 00:23:53.073 graph: explicitly disabled via build config 00:23:53.073 node: explicitly disabled via build config 00:23:53.073 00:23:53.073 drivers: 00:23:53.073 common/cpt: not in enabled drivers build config 00:23:53.073 common/dpaax: not in enabled drivers build config 00:23:53.073 common/iavf: not in enabled drivers build config 00:23:53.073 common/idpf: not in enabled drivers build config 00:23:53.073 common/ionic: not in enabled drivers build config 00:23:53.073 common/mvep: not in enabled drivers build config 00:23:53.073 common/octeontx: not in enabled drivers build config 00:23:53.073 bus/auxiliary: not in enabled drivers build config 00:23:53.073 bus/cdx: not in enabled drivers build config 00:23:53.073 bus/dpaa: not in enabled drivers build config 00:23:53.073 bus/fslmc: not in enabled drivers build config 00:23:53.073 bus/ifpga: not in enabled drivers build config 00:23:53.073 bus/platform: not in enabled drivers build config 00:23:53.073 bus/uacce: not in enabled drivers build config 00:23:53.073 bus/vmbus: not in enabled drivers build config 00:23:53.073 common/cnxk: not in enabled drivers build config 00:23:53.073 common/mlx5: not in enabled drivers build config 00:23:53.073 common/nfp: not in enabled drivers build config 00:23:53.073 common/nitrox: not in enabled drivers build config 00:23:53.073 common/qat: not in enabled drivers build config 00:23:53.073 common/sfc_efx: not in enabled drivers build config 00:23:53.073 mempool/bucket: not in enabled drivers build config 00:23:53.073 mempool/cnxk: not in enabled drivers build config 00:23:53.073 mempool/dpaa: not in enabled drivers build config 00:23:53.073 mempool/dpaa2: not in enabled drivers build config 00:23:53.073 mempool/octeontx: not in enabled drivers build config 00:23:53.073 mempool/stack: not in enabled drivers build config 00:23:53.073 dma/cnxk: not in enabled 
drivers build config 00:23:53.073 dma/dpaa: not in enabled drivers build config 00:23:53.073 dma/dpaa2: not in enabled drivers build config 00:23:53.073 dma/hisilicon: not in enabled drivers build config 00:23:53.073 dma/idxd: not in enabled drivers build config 00:23:53.073 dma/ioat: not in enabled drivers build config 00:23:53.073 dma/skeleton: not in enabled drivers build config 00:23:53.073 net/af_packet: not in enabled drivers build config 00:23:53.073 net/af_xdp: not in enabled drivers build config 00:23:53.073 net/ark: not in enabled drivers build config 00:23:53.073 net/atlantic: not in enabled drivers build config 00:23:53.073 net/avp: not in enabled drivers build config 00:23:53.073 net/axgbe: not in enabled drivers build config 00:23:53.073 net/bnx2x: not in enabled drivers build config 00:23:53.073 net/bnxt: not in enabled drivers build config 00:23:53.073 net/bonding: not in enabled drivers build config 00:23:53.073 net/cnxk: not in enabled drivers build config 00:23:53.073 net/cpfl: not in enabled drivers build config 00:23:53.073 net/cxgbe: not in enabled drivers build config 00:23:53.073 net/dpaa: not in enabled drivers build config 00:23:53.073 net/dpaa2: not in enabled drivers build config 00:23:53.073 net/e1000: not in enabled drivers build config 00:23:53.073 net/ena: not in enabled drivers build config 00:23:53.073 net/enetc: not in enabled drivers build config 00:23:53.073 net/enetfec: not in enabled drivers build config 00:23:53.073 net/enic: not in enabled drivers build config 00:23:53.073 net/failsafe: not in enabled drivers build config 00:23:53.073 net/fm10k: not in enabled drivers build config 00:23:53.073 net/gve: not in enabled drivers build config 00:23:53.073 net/hinic: not in enabled drivers build config 00:23:53.073 net/hns3: not in enabled drivers build config 00:23:53.073 net/i40e: not in enabled drivers build config 00:23:53.073 net/iavf: not in enabled drivers build config 00:23:53.073 net/ice: not in enabled drivers build config 00:23:53.073 net/idpf: not in enabled drivers build config 00:23:53.073 net/igc: not in enabled drivers build config 00:23:53.073 net/ionic: not in enabled drivers build config 00:23:53.073 net/ipn3ke: not in enabled drivers build config 00:23:53.073 net/ixgbe: not in enabled drivers build config 00:23:53.073 net/mana: not in enabled drivers build config 00:23:53.073 net/memif: not in enabled drivers build config 00:23:53.073 net/mlx4: not in enabled drivers build config 00:23:53.073 net/mlx5: not in enabled drivers build config 00:23:53.073 net/mvneta: not in enabled drivers build config 00:23:53.073 net/mvpp2: not in enabled drivers build config 00:23:53.073 net/netvsc: not in enabled drivers build config 00:23:53.073 net/nfb: not in enabled drivers build config 00:23:53.073 net/nfp: not in enabled drivers build config 00:23:53.073 net/ngbe: not in enabled drivers build config 00:23:53.073 net/null: not in enabled drivers build config 00:23:53.073 net/octeontx: not in enabled drivers build config 00:23:53.073 net/octeon_ep: not in enabled drivers build config 00:23:53.073 net/pcap: not in enabled drivers build config 00:23:53.073 net/pfe: not in enabled drivers build config 00:23:53.073 net/qede: not in enabled drivers build config 00:23:53.073 net/ring: not in enabled drivers build config 00:23:53.073 net/sfc: not in enabled drivers build config 00:23:53.074 net/softnic: not in enabled drivers build config 00:23:53.074 net/tap: not in enabled drivers build config 00:23:53.074 net/thunderx: not in enabled drivers build 
config 00:23:53.074 net/txgbe: not in enabled drivers build config 00:23:53.074 net/vdev_netvsc: not in enabled drivers build config 00:23:53.074 net/vhost: not in enabled drivers build config 00:23:53.074 net/virtio: not in enabled drivers build config 00:23:53.074 net/vmxnet3: not in enabled drivers build config 00:23:53.074 raw/*: missing internal dependency, "rawdev" 00:23:53.074 crypto/armv8: not in enabled drivers build config 00:23:53.074 crypto/bcmfs: not in enabled drivers build config 00:23:53.074 crypto/caam_jr: not in enabled drivers build config 00:23:53.074 crypto/ccp: not in enabled drivers build config 00:23:53.074 crypto/cnxk: not in enabled drivers build config 00:23:53.074 crypto/dpaa_sec: not in enabled drivers build config 00:23:53.074 crypto/dpaa2_sec: not in enabled drivers build config 00:23:53.074 crypto/ipsec_mb: not in enabled drivers build config 00:23:53.074 crypto/mlx5: not in enabled drivers build config 00:23:53.074 crypto/mvsam: not in enabled drivers build config 00:23:53.074 crypto/nitrox: not in enabled drivers build config 00:23:53.074 crypto/null: not in enabled drivers build config 00:23:53.074 crypto/octeontx: not in enabled drivers build config 00:23:53.074 crypto/openssl: not in enabled drivers build config 00:23:53.074 crypto/scheduler: not in enabled drivers build config 00:23:53.074 crypto/uadk: not in enabled drivers build config 00:23:53.074 crypto/virtio: not in enabled drivers build config 00:23:53.074 compress/isal: not in enabled drivers build config 00:23:53.074 compress/mlx5: not in enabled drivers build config 00:23:53.074 compress/nitrox: not in enabled drivers build config 00:23:53.074 compress/octeontx: not in enabled drivers build config 00:23:53.074 compress/zlib: not in enabled drivers build config 00:23:53.074 regex/*: missing internal dependency, "regexdev" 00:23:53.074 ml/*: missing internal dependency, "mldev" 00:23:53.074 vdpa/ifc: not in enabled drivers build config 00:23:53.074 vdpa/mlx5: not in enabled drivers build config 00:23:53.074 vdpa/nfp: not in enabled drivers build config 00:23:53.074 vdpa/sfc: not in enabled drivers build config 00:23:53.074 event/*: missing internal dependency, "eventdev" 00:23:53.074 baseband/*: missing internal dependency, "bbdev" 00:23:53.074 gpu/*: missing internal dependency, "gpudev" 00:23:53.074 00:23:53.074 00:23:53.074 Build targets in project: 85 00:23:53.074 00:23:53.074 DPDK 24.03.0 00:23:53.074 00:23:53.074 User defined options 00:23:53.074 buildtype : debug 00:23:53.074 default_library : shared 00:23:53.074 libdir : lib 00:23:53.074 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:23:53.074 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:23:53.074 c_link_args : 00:23:53.074 cpu_instruction_set: native 00:23:53.074 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:23:53.074 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:23:53.074 enable_docs : false 00:23:53.074 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:23:53.074 enable_kmods : false 00:23:53.074 max_lcores : 128 00:23:53.074 tests : false 00:23:53.074 
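The summary above is the net effect of SPDK's DPDK defaults: a debug build of shared libraries, every app and most libraries disabled, only the pci/vdev buses and the ring mempool driver enabled, and docs, kmods, and tests off. A sketch of an equivalent standalone configuration against a DPDK 24.03 tree (option names as printed in the summary; the long disable_apps/disable_libs lists elided):

  meson setup build-tmp \
      -Dbuildtype=debug \
      -Ddefault_library=shared \
      -Dmax_lcores=128 \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Dtests=false
  ninja -C build-tmp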
00:23:53.074 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:23:53.074 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:23:53.074 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:23:53.332 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:23:53.332 [3/268] Linking static target lib/librte_log.a 00:23:53.332 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:23:53.332 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:23:53.332 [6/268] Linking static target lib/librte_kvargs.a 00:23:53.898 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:23:53.898 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:23:54.156 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:23:54.156 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:23:54.156 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:23:54.156 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:23:54.156 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:23:54.417 [14/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:23:54.417 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:23:54.417 [16/268] Linking target lib/librte_log.so.24.1 00:23:54.417 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:23:54.676 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:23:54.676 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:23:54.676 [20/268] Linking static target lib/librte_telemetry.a 00:23:54.934 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:23:54.934 [22/268] Linking target lib/librte_kvargs.so.24.1 00:23:54.934 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:23:55.192 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:23:55.192 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:23:55.192 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:23:55.450 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:23:55.450 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:23:55.450 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:23:55.450 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:23:55.450 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:23:55.709 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:23:55.709 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:23:55.709 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:23:55.709 [35/268] Linking target lib/librte_telemetry.so.24.1 00:23:55.968 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:23:55.968 [37/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:23:56.226 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 
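A note on the ninja stream: the [N/268] counters are build-graph progress, "Linking static target" marks each per-library archive, and the *.sym_chk steps run the check-symbols.sh helper located during configure to sanity-check each library's exported symbols. Individual targets can be rebuilt by output path if needed, for example (build directory from the log; target name assumed from Meson's usual output naming):

  ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp lib/librte_log.a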
00:23:56.226 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:23:56.485 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:23:56.485 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:23:56.485 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:23:56.485 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:23:56.485 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:23:56.485 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:23:56.485 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:23:56.743 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:23:56.743 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:23:56.743 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:23:57.001 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:23:57.260 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:23:57.517 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:23:57.517 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:23:57.517 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:23:57.774 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:23:57.774 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:23:57.774 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:23:57.774 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:23:58.032 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:23:58.032 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:23:58.294 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:23:58.294 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:23:58.294 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:23:58.552 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:23:58.552 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:23:58.812 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:23:58.812 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:23:59.070 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:23:59.070 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:23:59.329 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:23:59.329 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:23:59.329 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:23:59.329 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:23:59.586 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:23:59.586 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:23:59.845 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:23:59.845 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:24:00.104 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:24:00.104 [79/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:24:00.104 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:24:00.104 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:24:00.105 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:24:00.363 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:24:00.623 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:24:00.623 [85/268] Linking static target lib/librte_eal.a 00:24:00.623 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:24:00.881 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:24:01.139 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:24:01.139 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:24:01.139 [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:24:01.139 [91/268] Linking static target lib/librte_ring.a 00:24:01.139 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:24:01.397 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:24:01.397 [94/268] Linking static target lib/librte_mempool.a 00:24:01.397 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:24:01.656 [96/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:24:01.656 [97/268] Linking static target lib/librte_rcu.a 00:24:01.656 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:24:01.914 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:24:01.914 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:24:01.914 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:24:01.914 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:24:02.172 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:24:02.172 [104/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:24:02.172 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:24:02.172 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:24:02.172 [107/268] Linking static target lib/librte_mbuf.a 00:24:02.430 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:24:02.430 [109/268] Linking static target lib/librte_net.a 00:24:02.688 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:24:02.688 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:24:02.946 [112/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:24:02.946 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:24:02.946 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:24:02.946 [115/268] Linking static target lib/librte_meter.a 00:24:02.946 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:24:03.204 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:24:03.462 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:24:03.462 [119/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:24:03.462 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:24:03.720 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:24:03.977 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:24:03.977 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:24:04.234 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:24:04.234 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:24:04.493 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:24:04.493 [127/268] Linking static target lib/librte_pci.a 00:24:04.493 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:24:04.493 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:24:04.751 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:24:04.751 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:24:04.751 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:24:04.751 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:24:04.751 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:24:04.751 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:24:04.751 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:24:05.009 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:24:05.009 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:24:05.009 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:24:05.009 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:24:05.009 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:24:05.009 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:24:05.009 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:24:05.009 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:24:05.009 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:24:05.267 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:24:05.527 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:24:05.527 [148/268] Linking static target lib/librte_cmdline.a 00:24:05.785 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:24:05.785 [150/268] Linking static target lib/librte_ethdev.a 00:24:05.785 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:24:05.785 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:24:06.043 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:24:06.043 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:24:06.302 [155/268] Linking static target lib/librte_timer.a 00:24:06.302 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:24:06.560 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:24:06.560 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:24:06.560 [159/268] Linking static target lib/librte_compressdev.a 00:24:06.819 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 
00:24:06.819 [161/268] Linking static target lib/librte_hash.a 00:24:07.078 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:24:07.078 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:24:07.336 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:24:07.336 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:24:07.336 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:24:07.594 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:24:07.594 [168/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:24:07.852 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:24:07.852 [170/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:08.109 [171/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:24:08.109 [172/268] Linking static target lib/librte_dmadev.a 00:24:08.110 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:24:08.367 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:24:08.367 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:24:08.367 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:24:08.367 [177/268] Linking static target lib/librte_cryptodev.a 00:24:08.367 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:24:08.625 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:24:08.883 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:24:08.883 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:24:08.883 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:24:09.141 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:24:09.141 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:09.141 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:24:09.141 [186/268] Linking static target lib/librte_power.a 00:24:09.399 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:24:09.399 [188/268] Linking static target lib/librte_reorder.a 00:24:09.658 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:24:09.658 [190/268] Linking static target lib/librte_security.a 00:24:09.915 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:24:09.915 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:24:09.915 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:24:10.173 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:24:10.173 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:24:10.430 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:24:10.688 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:24:10.688 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:24:10.945 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:24:10.945 [200/268] 
Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:24:10.945 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:24:11.202 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:24:11.202 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:11.202 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:24:11.769 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:24:11.769 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:24:11.769 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:24:12.027 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:24:12.027 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:24:12.027 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:24:12.027 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:24:12.285 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:24:12.285 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:24:12.285 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:24:12.285 [215/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:24:12.285 [216/268] Linking static target drivers/librte_bus_pci.a 00:24:12.285 [217/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:24:12.285 [218/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:24:12.285 [219/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:24:12.285 [220/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:24:12.285 [221/268] Linking static target drivers/librte_bus_vdev.a 00:24:12.543 [222/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:24:12.543 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:24:12.543 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:24:12.543 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:24:12.543 [226/268] Linking static target drivers/librte_mempool_ring.a 00:24:12.802 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:13.060 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:24:13.628 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:24:13.628 [230/268] Linking static target lib/librte_vhost.a 00:24:13.887 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:24:13.887 [232/268] Linking target lib/librte_eal.so.24.1 00:24:14.145 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:24:14.145 [234/268] Linking target lib/librte_ring.so.24.1 00:24:14.145 [235/268] Linking target lib/librte_pci.so.24.1 00:24:14.145 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:24:14.145 [237/268] Linking target lib/librte_meter.so.24.1 00:24:14.145 [238/268] Linking target lib/librte_dmadev.so.24.1 00:24:14.145 [239/268] Linking target 
lib/librte_timer.so.24.1 00:24:14.404 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:24:14.404 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:24:14.404 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:24:14.404 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:24:14.404 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:24:14.404 [245/268] Linking target lib/librte_rcu.so.24.1 00:24:14.404 [246/268] Linking target lib/librte_mempool.so.24.1 00:24:14.404 [247/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:24:14.662 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:24:14.662 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:24:14.662 [250/268] Linking target lib/librte_mbuf.so.24.1 00:24:14.662 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:24:14.662 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:24:14.921 [253/268] Linking target lib/librte_compressdev.so.24.1 00:24:14.921 [254/268] Linking target lib/librte_reorder.so.24.1 00:24:14.921 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:24:14.921 [256/268] Linking target lib/librte_net.so.24.1 00:24:14.921 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:24:14.921 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:24:14.921 [259/268] Linking target lib/librte_hash.so.24.1 00:24:14.921 [260/268] Linking target lib/librte_cmdline.so.24.1 00:24:14.921 [261/268] Linking target lib/librte_security.so.24.1 00:24:15.181 [262/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:24:15.181 [263/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:15.181 [264/268] Linking target lib/librte_ethdev.so.24.1 00:24:15.181 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:24:15.181 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:24:15.441 [267/268] Linking target lib/librte_power.so.24.1 00:24:15.441 [268/268] Linking target lib/librte_vhost.so.24.1 00:24:15.441 INFO: autodetecting backend as ninja 00:24:15.441 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:24:42.015 CC lib/log/log.o 00:24:42.015 CC lib/ut_mock/mock.o 00:24:42.015 CC lib/log/log_deprecated.o 00:24:42.015 CC lib/log/log_flags.o 00:24:42.015 CC lib/ut/ut.o 00:24:42.015 LIB libspdk_ut_mock.a 00:24:42.015 LIB libspdk_ut.a 00:24:42.015 SO libspdk_ut.so.2.0 00:24:42.015 SO libspdk_ut_mock.so.6.0 00:24:42.015 LIB libspdk_log.a 00:24:42.015 SO libspdk_log.so.7.0 00:24:42.015 SYMLINK libspdk_ut.so 00:24:42.015 SYMLINK libspdk_ut_mock.so 00:24:42.015 SYMLINK libspdk_log.so 00:24:42.015 CC lib/util/base64.o 00:24:42.015 CC lib/util/bit_array.o 00:24:42.015 CC lib/util/cpuset.o 00:24:42.015 CC lib/util/crc16.o 00:24:42.015 CC lib/util/crc32.o 00:24:42.015 CC lib/util/crc32c.o 00:24:42.015 CXX lib/trace_parser/trace.o 00:24:42.015 CC lib/dma/dma.o 00:24:42.015 CC lib/ioat/ioat.o 00:24:42.015 CC lib/vfio_user/host/vfio_user_pci.o 00:24:42.015 CC lib/vfio_user/host/vfio_user.o 00:24:42.015 LIB libspdk_dma.a 
00:24:42.015 CC lib/util/crc32_ieee.o 00:24:42.015 CC lib/util/crc64.o 00:24:42.015 SO libspdk_dma.so.5.0 00:24:42.015 CC lib/util/dif.o 00:24:42.015 CC lib/util/fd_group.o 00:24:42.015 CC lib/util/fd.o 00:24:42.015 SYMLINK libspdk_dma.so 00:24:42.015 CC lib/util/file.o 00:24:42.015 LIB libspdk_ioat.a 00:24:42.015 SO libspdk_ioat.so.7.0 00:24:42.015 SYMLINK libspdk_ioat.so 00:24:42.015 CC lib/util/hexlify.o 00:24:42.015 CC lib/util/iov.o 00:24:42.015 CC lib/util/math.o 00:24:42.273 CC lib/util/net.o 00:24:42.273 CC lib/util/pipe.o 00:24:42.273 CC lib/util/strerror_tls.o 00:24:42.273 CC lib/util/string.o 00:24:42.273 LIB libspdk_vfio_user.a 00:24:42.273 CC lib/util/uuid.o 00:24:42.273 CC lib/util/xor.o 00:24:42.273 SO libspdk_vfio_user.so.5.0 00:24:42.273 CC lib/util/zipf.o 00:24:42.273 CC lib/util/md5.o 00:24:42.273 SYMLINK libspdk_vfio_user.so 00:24:42.839 LIB libspdk_util.a 00:24:42.839 LIB libspdk_trace_parser.a 00:24:42.839 SO libspdk_trace_parser.so.6.0 00:24:42.839 SO libspdk_util.so.10.0 00:24:42.839 SYMLINK libspdk_trace_parser.so 00:24:43.096 SYMLINK libspdk_util.so 00:24:43.096 CC lib/env_dpdk/memory.o 00:24:43.096 CC lib/env_dpdk/env.o 00:24:43.096 CC lib/env_dpdk/pci.o 00:24:43.096 CC lib/idxd/idxd.o 00:24:43.096 CC lib/env_dpdk/init.o 00:24:43.096 CC lib/vmd/vmd.o 00:24:43.354 CC lib/conf/conf.o 00:24:43.354 CC lib/json/json_parse.o 00:24:43.354 CC lib/rdma_provider/common.o 00:24:43.354 CC lib/rdma_utils/rdma_utils.o 00:24:43.354 CC lib/rdma_provider/rdma_provider_verbs.o 00:24:43.611 LIB libspdk_conf.a 00:24:43.611 SO libspdk_conf.so.6.0 00:24:43.611 CC lib/env_dpdk/threads.o 00:24:43.611 LIB libspdk_rdma_utils.a 00:24:43.611 CC lib/json/json_util.o 00:24:43.611 LIB libspdk_rdma_provider.a 00:24:43.611 SO libspdk_rdma_utils.so.1.0 00:24:43.611 SYMLINK libspdk_conf.so 00:24:43.611 SO libspdk_rdma_provider.so.6.0 00:24:43.611 CC lib/json/json_write.o 00:24:43.611 CC lib/idxd/idxd_user.o 00:24:43.611 SYMLINK libspdk_rdma_utils.so 00:24:43.611 CC lib/idxd/idxd_kernel.o 00:24:43.869 SYMLINK libspdk_rdma_provider.so 00:24:43.869 CC lib/env_dpdk/pci_ioat.o 00:24:43.869 CC lib/env_dpdk/pci_virtio.o 00:24:43.869 CC lib/vmd/led.o 00:24:43.869 CC lib/env_dpdk/pci_vmd.o 00:24:43.869 CC lib/env_dpdk/pci_idxd.o 00:24:43.869 CC lib/env_dpdk/pci_event.o 00:24:43.869 LIB libspdk_idxd.a 00:24:43.869 CC lib/env_dpdk/sigbus_handler.o 00:24:44.126 CC lib/env_dpdk/pci_dpdk.o 00:24:44.126 SO libspdk_idxd.so.12.1 00:24:44.126 CC lib/env_dpdk/pci_dpdk_2207.o 00:24:44.126 LIB libspdk_json.a 00:24:44.126 CC lib/env_dpdk/pci_dpdk_2211.o 00:24:44.126 SYMLINK libspdk_idxd.so 00:24:44.126 SO libspdk_json.so.6.0 00:24:44.126 SYMLINK libspdk_json.so 00:24:44.385 LIB libspdk_vmd.a 00:24:44.385 SO libspdk_vmd.so.6.0 00:24:44.385 SYMLINK libspdk_vmd.so 00:24:44.385 CC lib/jsonrpc/jsonrpc_server.o 00:24:44.385 CC lib/jsonrpc/jsonrpc_client.o 00:24:44.385 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:24:44.385 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:24:44.644 LIB libspdk_jsonrpc.a 00:24:44.902 SO libspdk_jsonrpc.so.6.0 00:24:44.902 LIB libspdk_env_dpdk.a 00:24:44.902 SYMLINK libspdk_jsonrpc.so 00:24:44.902 SO libspdk_env_dpdk.so.15.0 00:24:45.179 CC lib/rpc/rpc.o 00:24:45.179 SYMLINK libspdk_env_dpdk.so 00:24:45.438 LIB libspdk_rpc.a 00:24:45.438 SO libspdk_rpc.so.6.0 00:24:45.438 SYMLINK libspdk_rpc.so 00:24:45.696 CC lib/notify/notify.o 00:24:45.696 CC lib/notify/notify_rpc.o 00:24:45.696 CC lib/trace/trace.o 00:24:45.696 CC lib/trace/trace_flags.o 00:24:45.696 CC lib/trace/trace_rpc.o 00:24:45.696 CC 
lib/keyring/keyring.o 00:24:45.696 CC lib/keyring/keyring_rpc.o 00:24:45.696 LIB libspdk_notify.a 00:24:45.954 SO libspdk_notify.so.6.0 00:24:45.954 SYMLINK libspdk_notify.so 00:24:45.954 LIB libspdk_trace.a 00:24:45.954 LIB libspdk_keyring.a 00:24:45.954 SO libspdk_trace.so.11.0 00:24:45.954 SO libspdk_keyring.so.2.0 00:24:45.954 SYMLINK libspdk_trace.so 00:24:46.213 SYMLINK libspdk_keyring.so 00:24:46.213 CC lib/sock/sock.o 00:24:46.213 CC lib/sock/sock_rpc.o 00:24:46.213 CC lib/thread/thread.o 00:24:46.213 CC lib/thread/iobuf.o 00:24:46.779 LIB libspdk_sock.a 00:24:46.779 SO libspdk_sock.so.10.0 00:24:47.040 SYMLINK libspdk_sock.so 00:24:47.299 CC lib/nvme/nvme_ctrlr_cmd.o 00:24:47.299 CC lib/nvme/nvme_fabric.o 00:24:47.299 CC lib/nvme/nvme_ctrlr.o 00:24:47.299 CC lib/nvme/nvme_ns_cmd.o 00:24:47.299 CC lib/nvme/nvme_ns.o 00:24:47.299 CC lib/nvme/nvme_pcie_common.o 00:24:47.299 CC lib/nvme/nvme_pcie.o 00:24:47.299 CC lib/nvme/nvme_qpair.o 00:24:47.299 CC lib/nvme/nvme.o 00:24:47.922 LIB libspdk_thread.a 00:24:47.922 CC lib/nvme/nvme_quirks.o 00:24:47.922 SO libspdk_thread.so.10.1 00:24:48.180 CC lib/nvme/nvme_transport.o 00:24:48.180 SYMLINK libspdk_thread.so 00:24:48.180 CC lib/nvme/nvme_discovery.o 00:24:48.180 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:24:48.180 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:24:48.438 CC lib/nvme/nvme_tcp.o 00:24:48.438 CC lib/nvme/nvme_opal.o 00:24:48.696 CC lib/accel/accel.o 00:24:48.696 CC lib/accel/accel_rpc.o 00:24:48.696 CC lib/blob/blobstore.o 00:24:48.696 CC lib/blob/request.o 00:24:48.954 CC lib/accel/accel_sw.o 00:24:48.954 CC lib/nvme/nvme_io_msg.o 00:24:48.954 CC lib/blob/zeroes.o 00:24:48.954 CC lib/blob/blob_bs_dev.o 00:24:49.212 CC lib/nvme/nvme_poll_group.o 00:24:49.212 CC lib/init/json_config.o 00:24:49.212 CC lib/virtio/virtio.o 00:24:49.470 CC lib/fsdev/fsdev.o 00:24:49.470 CC lib/init/subsystem.o 00:24:49.728 CC lib/init/subsystem_rpc.o 00:24:49.728 CC lib/virtio/virtio_vhost_user.o 00:24:49.728 CC lib/virtio/virtio_vfio_user.o 00:24:49.728 CC lib/virtio/virtio_pci.o 00:24:49.728 CC lib/init/rpc.o 00:24:49.987 CC lib/fsdev/fsdev_io.o 00:24:49.987 LIB libspdk_accel.a 00:24:49.987 LIB libspdk_init.a 00:24:49.987 CC lib/fsdev/fsdev_rpc.o 00:24:49.987 SO libspdk_accel.so.16.0 00:24:49.987 SO libspdk_init.so.6.0 00:24:49.987 CC lib/nvme/nvme_zns.o 00:24:49.987 SYMLINK libspdk_accel.so 00:24:49.987 CC lib/nvme/nvme_stubs.o 00:24:49.987 CC lib/nvme/nvme_auth.o 00:24:49.987 SYMLINK libspdk_init.so 00:24:49.987 CC lib/nvme/nvme_cuse.o 00:24:50.246 LIB libspdk_virtio.a 00:24:50.246 SO libspdk_virtio.so.7.0 00:24:50.246 CC lib/bdev/bdev.o 00:24:50.246 SYMLINK libspdk_virtio.so 00:24:50.246 CC lib/nvme/nvme_rdma.o 00:24:50.246 CC lib/event/app.o 00:24:50.505 LIB libspdk_fsdev.a 00:24:50.505 SO libspdk_fsdev.so.1.0 00:24:50.505 SYMLINK libspdk_fsdev.so 00:24:50.505 CC lib/event/reactor.o 00:24:50.762 CC lib/event/log_rpc.o 00:24:50.762 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:24:51.041 CC lib/event/app_rpc.o 00:24:51.041 CC lib/event/scheduler_static.o 00:24:51.041 CC lib/bdev/bdev_rpc.o 00:24:51.041 CC lib/bdev/bdev_zone.o 00:24:51.041 CC lib/bdev/part.o 00:24:51.300 CC lib/bdev/scsi_nvme.o 00:24:51.300 LIB libspdk_event.a 00:24:51.300 SO libspdk_event.so.14.0 00:24:51.300 SYMLINK libspdk_event.so 00:24:51.866 LIB libspdk_nvme.a 00:24:51.866 LIB libspdk_fuse_dispatcher.a 00:24:51.866 SO libspdk_fuse_dispatcher.so.1.0 00:24:51.866 SYMLINK libspdk_fuse_dispatcher.so 00:24:51.866 SO libspdk_nvme.so.14.0 00:24:52.124 SYMLINK libspdk_nvme.so 00:24:53.059 LIB 
libspdk_bdev.a 00:24:53.317 SO libspdk_bdev.so.17.0 00:24:53.317 LIB libspdk_blob.a 00:24:53.317 SO libspdk_blob.so.11.0 00:24:53.317 SYMLINK libspdk_bdev.so 00:24:53.317 SYMLINK libspdk_blob.so 00:24:53.576 CC lib/ublk/ublk.o 00:24:53.576 CC lib/ftl/ftl_layout.o 00:24:53.576 CC lib/ftl/ftl_init.o 00:24:53.576 CC lib/ftl/ftl_core.o 00:24:53.576 CC lib/ublk/ublk_rpc.o 00:24:53.576 CC lib/scsi/dev.o 00:24:53.576 CC lib/nvmf/ctrlr.o 00:24:53.576 CC lib/nbd/nbd.o 00:24:53.576 CC lib/lvol/lvol.o 00:24:53.576 CC lib/blobfs/blobfs.o 00:24:53.834 CC lib/nvmf/ctrlr_discovery.o 00:24:53.834 CC lib/scsi/lun.o 00:24:53.834 CC lib/scsi/port.o 00:24:53.834 CC lib/scsi/scsi.o 00:24:54.106 CC lib/ftl/ftl_debug.o 00:24:54.106 CC lib/scsi/scsi_bdev.o 00:24:54.106 CC lib/scsi/scsi_pr.o 00:24:54.373 CC lib/nbd/nbd_rpc.o 00:24:54.373 LIB libspdk_ublk.a 00:24:54.373 CC lib/scsi/scsi_rpc.o 00:24:54.373 SO libspdk_ublk.so.3.0 00:24:54.373 CC lib/ftl/ftl_io.o 00:24:54.373 CC lib/scsi/task.o 00:24:54.373 SYMLINK libspdk_ublk.so 00:24:54.373 CC lib/blobfs/tree.o 00:24:54.373 LIB libspdk_nbd.a 00:24:54.373 CC lib/ftl/ftl_sb.o 00:24:54.373 SO libspdk_nbd.so.7.0 00:24:54.631 SYMLINK libspdk_nbd.so 00:24:54.631 CC lib/ftl/ftl_l2p.o 00:24:54.631 CC lib/ftl/ftl_l2p_flat.o 00:24:54.631 CC lib/ftl/ftl_nv_cache.o 00:24:54.631 CC lib/ftl/ftl_band.o 00:24:54.631 CC lib/ftl/ftl_band_ops.o 00:24:54.631 CC lib/ftl/ftl_writer.o 00:24:54.890 LIB libspdk_blobfs.a 00:24:54.890 LIB libspdk_scsi.a 00:24:54.890 SO libspdk_blobfs.so.10.0 00:24:54.890 SO libspdk_scsi.so.9.0 00:24:54.890 LIB libspdk_lvol.a 00:24:54.890 SYMLINK libspdk_blobfs.so 00:24:54.890 CC lib/ftl/ftl_rq.o 00:24:54.890 SO libspdk_lvol.so.10.0 00:24:54.890 CC lib/ftl/ftl_reloc.o 00:24:54.890 SYMLINK libspdk_scsi.so 00:24:54.890 CC lib/ftl/ftl_l2p_cache.o 00:24:54.890 CC lib/nvmf/ctrlr_bdev.o 00:24:55.148 SYMLINK libspdk_lvol.so 00:24:55.148 CC lib/ftl/ftl_p2l.o 00:24:55.148 CC lib/vhost/vhost.o 00:24:55.148 CC lib/iscsi/conn.o 00:24:55.406 CC lib/iscsi/init_grp.o 00:24:55.406 CC lib/iscsi/iscsi.o 00:24:55.406 CC lib/iscsi/param.o 00:24:55.406 CC lib/iscsi/portal_grp.o 00:24:55.406 CC lib/iscsi/tgt_node.o 00:24:55.664 CC lib/iscsi/iscsi_subsystem.o 00:24:55.664 CC lib/iscsi/iscsi_rpc.o 00:24:55.664 CC lib/ftl/ftl_p2l_log.o 00:24:55.922 CC lib/ftl/mngt/ftl_mngt.o 00:24:55.922 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:24:55.922 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:24:55.922 CC lib/ftl/mngt/ftl_mngt_startup.o 00:24:55.922 CC lib/vhost/vhost_rpc.o 00:24:56.179 CC lib/ftl/mngt/ftl_mngt_md.o 00:24:56.179 CC lib/iscsi/task.o 00:24:56.179 CC lib/vhost/vhost_scsi.o 00:24:56.179 CC lib/nvmf/subsystem.o 00:24:56.179 CC lib/ftl/mngt/ftl_mngt_misc.o 00:24:56.179 CC lib/vhost/vhost_blk.o 00:24:56.437 CC lib/vhost/rte_vhost_user.o 00:24:56.437 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:24:56.437 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:24:56.437 CC lib/ftl/mngt/ftl_mngt_band.o 00:24:56.437 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:24:56.693 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:24:56.693 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:24:56.693 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:24:56.693 CC lib/ftl/utils/ftl_conf.o 00:24:56.950 CC lib/nvmf/nvmf.o 00:24:56.950 LIB libspdk_iscsi.a 00:24:56.950 SO libspdk_iscsi.so.8.0 00:24:56.950 CC lib/nvmf/nvmf_rpc.o 00:24:57.208 CC lib/nvmf/transport.o 00:24:57.208 CC lib/nvmf/tcp.o 00:24:57.208 CC lib/nvmf/stubs.o 00:24:57.208 SYMLINK libspdk_iscsi.so 00:24:57.208 CC lib/nvmf/mdns_server.o 00:24:57.465 CC lib/ftl/utils/ftl_md.o 00:24:57.465 CC lib/nvmf/rdma.o 00:24:57.722 CC 
lib/nvmf/auth.o 00:24:57.722 LIB libspdk_vhost.a 00:24:57.722 CC lib/ftl/utils/ftl_mempool.o 00:24:57.722 SO libspdk_vhost.so.8.0 00:24:57.979 CC lib/ftl/utils/ftl_bitmap.o 00:24:57.979 CC lib/ftl/utils/ftl_property.o 00:24:57.979 SYMLINK libspdk_vhost.so 00:24:57.979 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:24:57.979 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:24:57.979 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:24:58.237 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:24:58.237 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:24:58.237 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:24:58.494 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:24:58.494 CC lib/ftl/upgrade/ftl_sb_v3.o 00:24:58.495 CC lib/ftl/upgrade/ftl_sb_v5.o 00:24:58.495 CC lib/ftl/nvc/ftl_nvc_dev.o 00:24:58.495 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:24:58.495 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:24:58.495 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:24:58.752 CC lib/ftl/base/ftl_base_dev.o 00:24:58.752 CC lib/ftl/base/ftl_base_bdev.o 00:24:58.752 CC lib/ftl/ftl_trace.o 00:24:59.010 LIB libspdk_ftl.a 00:24:59.268 SO libspdk_ftl.so.9.0 00:24:59.526 SYMLINK libspdk_ftl.so 00:25:00.091 LIB libspdk_nvmf.a 00:25:00.438 SO libspdk_nvmf.so.19.0 00:25:00.438 SYMLINK libspdk_nvmf.so 00:25:01.006 CC module/env_dpdk/env_dpdk_rpc.o 00:25:01.006 CC module/accel/dsa/accel_dsa.o 00:25:01.006 CC module/blob/bdev/blob_bdev.o 00:25:01.006 CC module/scheduler/dynamic/scheduler_dynamic.o 00:25:01.006 CC module/sock/posix/posix.o 00:25:01.006 CC module/accel/error/accel_error.o 00:25:01.006 CC module/fsdev/aio/fsdev_aio.o 00:25:01.006 CC module/accel/ioat/accel_ioat.o 00:25:01.006 CC module/accel/iaa/accel_iaa.o 00:25:01.006 CC module/keyring/file/keyring.o 00:25:01.006 LIB libspdk_env_dpdk_rpc.a 00:25:01.006 SO libspdk_env_dpdk_rpc.so.6.0 00:25:01.006 SYMLINK libspdk_env_dpdk_rpc.so 00:25:01.264 CC module/keyring/file/keyring_rpc.o 00:25:01.264 LIB libspdk_scheduler_dynamic.a 00:25:01.264 CC module/accel/dsa/accel_dsa_rpc.o 00:25:01.264 SO libspdk_scheduler_dynamic.so.4.0 00:25:01.264 CC module/accel/error/accel_error_rpc.o 00:25:01.264 CC module/keyring/linux/keyring.o 00:25:01.264 CC module/accel/ioat/accel_ioat_rpc.o 00:25:01.264 CC module/accel/iaa/accel_iaa_rpc.o 00:25:01.264 SYMLINK libspdk_scheduler_dynamic.so 00:25:01.522 LIB libspdk_accel_dsa.a 00:25:01.522 LIB libspdk_keyring_file.a 00:25:01.522 LIB libspdk_blob_bdev.a 00:25:01.522 SO libspdk_accel_dsa.so.5.0 00:25:01.522 LIB libspdk_accel_error.a 00:25:01.522 SO libspdk_keyring_file.so.2.0 00:25:01.522 SO libspdk_blob_bdev.so.11.0 00:25:01.522 SO libspdk_accel_error.so.2.0 00:25:01.522 LIB libspdk_accel_ioat.a 00:25:01.522 SO libspdk_accel_ioat.so.6.0 00:25:01.522 SYMLINK libspdk_accel_dsa.so 00:25:01.522 SYMLINK libspdk_keyring_file.so 00:25:01.522 CC module/fsdev/aio/fsdev_aio_rpc.o 00:25:01.522 SYMLINK libspdk_blob_bdev.so 00:25:01.522 CC module/keyring/linux/keyring_rpc.o 00:25:01.522 CC module/fsdev/aio/linux_aio_mgr.o 00:25:01.522 SYMLINK libspdk_accel_error.so 00:25:01.522 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:25:01.522 SYMLINK libspdk_accel_ioat.so 00:25:01.522 LIB libspdk_accel_iaa.a 00:25:01.780 SO libspdk_accel_iaa.so.3.0 00:25:01.780 LIB libspdk_keyring_linux.a 00:25:01.780 SYMLINK libspdk_accel_iaa.so 00:25:01.780 SO libspdk_keyring_linux.so.1.0 00:25:01.780 CC module/scheduler/gscheduler/gscheduler.o 00:25:01.780 SYMLINK libspdk_keyring_linux.so 00:25:01.780 LIB libspdk_scheduler_dpdk_governor.a 00:25:02.038 LIB libspdk_fsdev_aio.a 00:25:02.038 SO 
libspdk_scheduler_dpdk_governor.so.4.0 00:25:02.039 SO libspdk_fsdev_aio.so.1.0 00:25:02.039 CC module/bdev/delay/vbdev_delay.o 00:25:02.039 CC module/bdev/gpt/gpt.o 00:25:02.039 LIB libspdk_sock_posix.a 00:25:02.039 CC module/blobfs/bdev/blobfs_bdev.o 00:25:02.039 LIB libspdk_scheduler_gscheduler.a 00:25:02.039 CC module/bdev/error/vbdev_error.o 00:25:02.039 SYMLINK libspdk_scheduler_dpdk_governor.so 00:25:02.039 CC module/bdev/error/vbdev_error_rpc.o 00:25:02.039 SO libspdk_sock_posix.so.6.0 00:25:02.039 SYMLINK libspdk_fsdev_aio.so 00:25:02.039 CC module/bdev/lvol/vbdev_lvol.o 00:25:02.039 SO libspdk_scheduler_gscheduler.so.4.0 00:25:02.039 CC module/bdev/malloc/bdev_malloc.o 00:25:02.039 SYMLINK libspdk_sock_posix.so 00:25:02.039 CC module/bdev/delay/vbdev_delay_rpc.o 00:25:02.039 SYMLINK libspdk_scheduler_gscheduler.so 00:25:02.295 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:25:02.295 CC module/bdev/gpt/vbdev_gpt.o 00:25:02.295 CC module/bdev/null/bdev_null.o 00:25:02.295 LIB libspdk_bdev_error.a 00:25:02.295 CC module/bdev/nvme/bdev_nvme.o 00:25:02.295 SO libspdk_bdev_error.so.6.0 00:25:02.295 LIB libspdk_blobfs_bdev.a 00:25:02.552 SO libspdk_blobfs_bdev.so.6.0 00:25:02.552 SYMLINK libspdk_bdev_error.so 00:25:02.552 CC module/bdev/nvme/bdev_nvme_rpc.o 00:25:02.552 CC module/bdev/passthru/vbdev_passthru.o 00:25:02.552 CC module/bdev/raid/bdev_raid.o 00:25:02.552 SYMLINK libspdk_blobfs_bdev.so 00:25:02.552 LIB libspdk_bdev_gpt.a 00:25:02.552 CC module/bdev/raid/bdev_raid_rpc.o 00:25:02.552 CC module/bdev/malloc/bdev_malloc_rpc.o 00:25:02.552 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:25:02.552 SO libspdk_bdev_gpt.so.6.0 00:25:02.552 CC module/bdev/null/bdev_null_rpc.o 00:25:02.552 LIB libspdk_bdev_delay.a 00:25:02.552 SYMLINK libspdk_bdev_gpt.so 00:25:02.552 CC module/bdev/raid/bdev_raid_sb.o 00:25:02.810 SO libspdk_bdev_delay.so.6.0 00:25:02.810 LIB libspdk_bdev_malloc.a 00:25:02.810 SYMLINK libspdk_bdev_delay.so 00:25:02.810 CC module/bdev/raid/raid0.o 00:25:02.810 LIB libspdk_bdev_null.a 00:25:02.810 CC module/bdev/raid/raid1.o 00:25:02.810 SO libspdk_bdev_malloc.so.6.0 00:25:02.810 SO libspdk_bdev_null.so.6.0 00:25:02.810 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:25:02.810 SYMLINK libspdk_bdev_malloc.so 00:25:02.810 SYMLINK libspdk_bdev_null.so 00:25:02.810 CC module/bdev/nvme/nvme_rpc.o 00:25:03.068 CC module/bdev/raid/concat.o 00:25:03.068 CC module/bdev/split/vbdev_split.o 00:25:03.068 CC module/bdev/nvme/bdev_mdns_client.o 00:25:03.068 LIB libspdk_bdev_passthru.a 00:25:03.068 SO libspdk_bdev_passthru.so.6.0 00:25:03.068 CC module/bdev/nvme/vbdev_opal.o 00:25:03.326 SYMLINK libspdk_bdev_passthru.so 00:25:03.326 CC module/bdev/nvme/vbdev_opal_rpc.o 00:25:03.326 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:25:03.326 LIB libspdk_bdev_lvol.a 00:25:03.326 SO libspdk_bdev_lvol.so.6.0 00:25:03.326 CC module/bdev/split/vbdev_split_rpc.o 00:25:03.326 SYMLINK libspdk_bdev_lvol.so 00:25:03.585 CC module/bdev/zone_block/vbdev_zone_block.o 00:25:03.585 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:25:03.585 CC module/bdev/ftl/bdev_ftl.o 00:25:03.585 CC module/bdev/ftl/bdev_ftl_rpc.o 00:25:03.585 CC module/bdev/aio/bdev_aio.o 00:25:03.585 LIB libspdk_bdev_split.a 00:25:03.585 CC module/bdev/iscsi/bdev_iscsi.o 00:25:03.585 SO libspdk_bdev_split.so.6.0 00:25:03.585 CC module/bdev/virtio/bdev_virtio_scsi.o 00:25:03.844 SYMLINK libspdk_bdev_split.so 00:25:03.844 CC module/bdev/virtio/bdev_virtio_blk.o 00:25:03.844 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:25:03.844 CC 
module/bdev/aio/bdev_aio_rpc.o 00:25:04.118 CC module/bdev/virtio/bdev_virtio_rpc.o 00:25:04.118 LIB libspdk_bdev_ftl.a 00:25:04.118 SO libspdk_bdev_ftl.so.6.0 00:25:04.118 LIB libspdk_bdev_iscsi.a 00:25:04.118 LIB libspdk_bdev_zone_block.a 00:25:04.118 LIB libspdk_bdev_aio.a 00:25:04.118 SO libspdk_bdev_iscsi.so.6.0 00:25:04.118 SO libspdk_bdev_zone_block.so.6.0 00:25:04.118 SO libspdk_bdev_aio.so.6.0 00:25:04.118 SYMLINK libspdk_bdev_ftl.so 00:25:04.377 LIB libspdk_bdev_raid.a 00:25:04.377 LIB libspdk_bdev_virtio.a 00:25:04.377 SYMLINK libspdk_bdev_iscsi.so 00:25:04.377 SYMLINK libspdk_bdev_aio.so 00:25:04.377 SYMLINK libspdk_bdev_zone_block.so 00:25:04.377 SO libspdk_bdev_virtio.so.6.0 00:25:04.377 SO libspdk_bdev_raid.so.6.0 00:25:04.377 SYMLINK libspdk_bdev_virtio.so 00:25:04.377 SYMLINK libspdk_bdev_raid.so 00:25:05.752 LIB libspdk_bdev_nvme.a 00:25:05.752 SO libspdk_bdev_nvme.so.7.0 00:25:05.752 SYMLINK libspdk_bdev_nvme.so 00:25:06.321 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:25:06.321 CC module/event/subsystems/sock/sock.o 00:25:06.321 CC module/event/subsystems/keyring/keyring.o 00:25:06.321 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:25:06.321 CC module/event/subsystems/iobuf/iobuf.o 00:25:06.321 CC module/event/subsystems/vmd/vmd.o 00:25:06.321 CC module/event/subsystems/vmd/vmd_rpc.o 00:25:06.321 CC module/event/subsystems/scheduler/scheduler.o 00:25:06.321 CC module/event/subsystems/fsdev/fsdev.o 00:25:06.321 LIB libspdk_event_sock.a 00:25:06.321 LIB libspdk_event_keyring.a 00:25:06.580 SO libspdk_event_sock.so.5.0 00:25:06.580 SO libspdk_event_keyring.so.1.0 00:25:06.580 LIB libspdk_event_iobuf.a 00:25:06.580 LIB libspdk_event_vmd.a 00:25:06.580 LIB libspdk_event_vhost_blk.a 00:25:06.580 LIB libspdk_event_fsdev.a 00:25:06.580 SO libspdk_event_iobuf.so.3.0 00:25:06.580 LIB libspdk_event_scheduler.a 00:25:06.580 SO libspdk_event_vmd.so.6.0 00:25:06.580 SYMLINK libspdk_event_sock.so 00:25:06.580 SO libspdk_event_fsdev.so.1.0 00:25:06.580 SO libspdk_event_vhost_blk.so.3.0 00:25:06.580 SO libspdk_event_scheduler.so.4.0 00:25:06.580 SYMLINK libspdk_event_keyring.so 00:25:06.580 SYMLINK libspdk_event_iobuf.so 00:25:06.580 SYMLINK libspdk_event_fsdev.so 00:25:06.580 SYMLINK libspdk_event_vhost_blk.so 00:25:06.580 SYMLINK libspdk_event_vmd.so 00:25:06.580 SYMLINK libspdk_event_scheduler.so 00:25:06.838 CC module/event/subsystems/accel/accel.o 00:25:07.096 LIB libspdk_event_accel.a 00:25:07.096 SO libspdk_event_accel.so.6.0 00:25:07.096 SYMLINK libspdk_event_accel.so 00:25:07.357 CC module/event/subsystems/bdev/bdev.o 00:25:07.615 LIB libspdk_event_bdev.a 00:25:07.615 SO libspdk_event_bdev.so.6.0 00:25:07.615 SYMLINK libspdk_event_bdev.so 00:25:07.874 CC module/event/subsystems/nbd/nbd.o 00:25:07.874 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:25:07.874 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:25:07.874 CC module/event/subsystems/scsi/scsi.o 00:25:07.874 CC module/event/subsystems/ublk/ublk.o 00:25:08.132 LIB libspdk_event_nbd.a 00:25:08.132 LIB libspdk_event_ublk.a 00:25:08.132 LIB libspdk_event_scsi.a 00:25:08.132 SO libspdk_event_nbd.so.6.0 00:25:08.132 SO libspdk_event_ublk.so.3.0 00:25:08.132 SO libspdk_event_scsi.so.6.0 00:25:08.132 SYMLINK libspdk_event_scsi.so 00:25:08.132 SYMLINK libspdk_event_nbd.so 00:25:08.132 LIB libspdk_event_nvmf.a 00:25:08.132 SYMLINK libspdk_event_ublk.so 00:25:08.132 SO libspdk_event_nvmf.so.6.0 00:25:08.390 SYMLINK libspdk_event_nvmf.so 00:25:08.390 CC module/event/subsystems/iscsi/iscsi.o 00:25:08.390 CC 
module/event/subsystems/vhost_scsi/vhost_scsi.o 00:25:08.649 LIB libspdk_event_vhost_scsi.a 00:25:08.649 SO libspdk_event_vhost_scsi.so.3.0 00:25:08.649 LIB libspdk_event_iscsi.a 00:25:08.649 SO libspdk_event_iscsi.so.6.0 00:25:08.649 SYMLINK libspdk_event_vhost_scsi.so 00:25:08.649 SYMLINK libspdk_event_iscsi.so 00:25:08.907 SO libspdk.so.6.0 00:25:08.907 SYMLINK libspdk.so 00:25:09.166 CC test/rpc_client/rpc_client_test.o 00:25:09.166 TEST_HEADER include/spdk/accel.h 00:25:09.166 TEST_HEADER include/spdk/accel_module.h 00:25:09.166 TEST_HEADER include/spdk/assert.h 00:25:09.166 CXX app/trace/trace.o 00:25:09.166 TEST_HEADER include/spdk/barrier.h 00:25:09.166 TEST_HEADER include/spdk/base64.h 00:25:09.166 TEST_HEADER include/spdk/bdev.h 00:25:09.166 TEST_HEADER include/spdk/bdev_module.h 00:25:09.166 TEST_HEADER include/spdk/bdev_zone.h 00:25:09.166 TEST_HEADER include/spdk/bit_array.h 00:25:09.166 TEST_HEADER include/spdk/bit_pool.h 00:25:09.166 TEST_HEADER include/spdk/blob_bdev.h 00:25:09.166 TEST_HEADER include/spdk/blobfs_bdev.h 00:25:09.166 TEST_HEADER include/spdk/blobfs.h 00:25:09.166 TEST_HEADER include/spdk/blob.h 00:25:09.166 TEST_HEADER include/spdk/conf.h 00:25:09.166 TEST_HEADER include/spdk/config.h 00:25:09.166 TEST_HEADER include/spdk/cpuset.h 00:25:09.166 TEST_HEADER include/spdk/crc16.h 00:25:09.166 TEST_HEADER include/spdk/crc32.h 00:25:09.166 TEST_HEADER include/spdk/crc64.h 00:25:09.166 TEST_HEADER include/spdk/dif.h 00:25:09.166 TEST_HEADER include/spdk/dma.h 00:25:09.166 TEST_HEADER include/spdk/endian.h 00:25:09.166 TEST_HEADER include/spdk/env_dpdk.h 00:25:09.166 TEST_HEADER include/spdk/env.h 00:25:09.166 TEST_HEADER include/spdk/event.h 00:25:09.166 TEST_HEADER include/spdk/fd_group.h 00:25:09.166 TEST_HEADER include/spdk/fd.h 00:25:09.166 TEST_HEADER include/spdk/file.h 00:25:09.166 TEST_HEADER include/spdk/fsdev.h 00:25:09.166 TEST_HEADER include/spdk/fsdev_module.h 00:25:09.166 TEST_HEADER include/spdk/ftl.h 00:25:09.166 TEST_HEADER include/spdk/fuse_dispatcher.h 00:25:09.166 CC test/thread/poller_perf/poller_perf.o 00:25:09.166 TEST_HEADER include/spdk/gpt_spec.h 00:25:09.166 TEST_HEADER include/spdk/hexlify.h 00:25:09.166 TEST_HEADER include/spdk/histogram_data.h 00:25:09.166 TEST_HEADER include/spdk/idxd.h 00:25:09.166 TEST_HEADER include/spdk/idxd_spec.h 00:25:09.166 CC examples/ioat/perf/perf.o 00:25:09.166 TEST_HEADER include/spdk/init.h 00:25:09.166 TEST_HEADER include/spdk/ioat.h 00:25:09.166 CC test/dma/test_dma/test_dma.o 00:25:09.166 CC examples/util/zipf/zipf.o 00:25:09.166 TEST_HEADER include/spdk/ioat_spec.h 00:25:09.166 TEST_HEADER include/spdk/iscsi_spec.h 00:25:09.166 TEST_HEADER include/spdk/json.h 00:25:09.166 TEST_HEADER include/spdk/jsonrpc.h 00:25:09.166 TEST_HEADER include/spdk/keyring.h 00:25:09.166 TEST_HEADER include/spdk/keyring_module.h 00:25:09.166 TEST_HEADER include/spdk/likely.h 00:25:09.166 TEST_HEADER include/spdk/log.h 00:25:09.166 TEST_HEADER include/spdk/lvol.h 00:25:09.166 TEST_HEADER include/spdk/md5.h 00:25:09.166 CC test/app/bdev_svc/bdev_svc.o 00:25:09.166 TEST_HEADER include/spdk/memory.h 00:25:09.166 TEST_HEADER include/spdk/mmio.h 00:25:09.166 TEST_HEADER include/spdk/nbd.h 00:25:09.166 TEST_HEADER include/spdk/net.h 00:25:09.166 TEST_HEADER include/spdk/notify.h 00:25:09.166 TEST_HEADER include/spdk/nvme.h 00:25:09.166 TEST_HEADER include/spdk/nvme_intel.h 00:25:09.425 TEST_HEADER include/spdk/nvme_ocssd.h 00:25:09.425 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:25:09.425 TEST_HEADER include/spdk/nvme_spec.h 
00:25:09.425 TEST_HEADER include/spdk/nvme_zns.h 00:25:09.425 TEST_HEADER include/spdk/nvmf_cmd.h 00:25:09.425 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:25:09.425 TEST_HEADER include/spdk/nvmf.h 00:25:09.425 TEST_HEADER include/spdk/nvmf_spec.h 00:25:09.425 TEST_HEADER include/spdk/nvmf_transport.h 00:25:09.425 TEST_HEADER include/spdk/opal.h 00:25:09.425 TEST_HEADER include/spdk/opal_spec.h 00:25:09.425 TEST_HEADER include/spdk/pci_ids.h 00:25:09.425 TEST_HEADER include/spdk/pipe.h 00:25:09.425 TEST_HEADER include/spdk/queue.h 00:25:09.425 TEST_HEADER include/spdk/reduce.h 00:25:09.425 TEST_HEADER include/spdk/rpc.h 00:25:09.425 TEST_HEADER include/spdk/scheduler.h 00:25:09.425 TEST_HEADER include/spdk/scsi.h 00:25:09.425 TEST_HEADER include/spdk/scsi_spec.h 00:25:09.425 TEST_HEADER include/spdk/sock.h 00:25:09.425 CC test/env/mem_callbacks/mem_callbacks.o 00:25:09.425 TEST_HEADER include/spdk/stdinc.h 00:25:09.425 TEST_HEADER include/spdk/string.h 00:25:09.425 TEST_HEADER include/spdk/thread.h 00:25:09.425 TEST_HEADER include/spdk/trace.h 00:25:09.425 TEST_HEADER include/spdk/trace_parser.h 00:25:09.425 TEST_HEADER include/spdk/tree.h 00:25:09.425 TEST_HEADER include/spdk/ublk.h 00:25:09.425 TEST_HEADER include/spdk/util.h 00:25:09.425 TEST_HEADER include/spdk/uuid.h 00:25:09.425 TEST_HEADER include/spdk/version.h 00:25:09.425 TEST_HEADER include/spdk/vfio_user_pci.h 00:25:09.425 TEST_HEADER include/spdk/vfio_user_spec.h 00:25:09.425 TEST_HEADER include/spdk/vhost.h 00:25:09.425 TEST_HEADER include/spdk/vmd.h 00:25:09.425 TEST_HEADER include/spdk/xor.h 00:25:09.425 TEST_HEADER include/spdk/zipf.h 00:25:09.425 CXX test/cpp_headers/accel.o 00:25:09.425 LINK poller_perf 00:25:09.425 LINK rpc_client_test 00:25:09.425 LINK ioat_perf 00:25:09.683 LINK zipf 00:25:09.683 LINK bdev_svc 00:25:09.683 CXX test/cpp_headers/accel_module.o 00:25:09.683 LINK spdk_trace 00:25:09.941 CXX test/cpp_headers/assert.o 00:25:09.941 CC test/app/histogram_perf/histogram_perf.o 00:25:09.941 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:25:09.941 CC examples/ioat/verify/verify.o 00:25:09.941 CC examples/interrupt_tgt/interrupt_tgt.o 00:25:09.941 LINK test_dma 00:25:09.941 CC app/trace_record/trace_record.o 00:25:10.199 CC app/nvmf_tgt/nvmf_main.o 00:25:10.199 CXX test/cpp_headers/barrier.o 00:25:10.199 LINK interrupt_tgt 00:25:10.199 LINK mem_callbacks 00:25:10.199 LINK histogram_perf 00:25:10.199 LINK verify 00:25:10.199 LINK spdk_trace_record 00:25:10.458 LINK nvme_fuzz 00:25:10.458 CXX test/cpp_headers/base64.o 00:25:10.458 LINK nvmf_tgt 00:25:10.458 CC test/env/vtophys/vtophys.o 00:25:10.458 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:25:10.458 CC test/env/pci/pci_ut.o 00:25:10.458 CC test/env/memory/memory_ut.o 00:25:10.716 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:25:10.716 CXX test/cpp_headers/bdev.o 00:25:10.716 LINK vtophys 00:25:10.716 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:25:10.716 LINK env_dpdk_post_init 00:25:10.974 CXX test/cpp_headers/bdev_module.o 00:25:10.974 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:25:10.974 CC app/iscsi_tgt/iscsi_tgt.o 00:25:10.974 CC examples/thread/thread/thread_ex.o 00:25:10.974 LINK pci_ut 00:25:10.974 CC test/event/event_perf/event_perf.o 00:25:11.233 CC test/nvme/aer/aer.o 00:25:11.233 LINK iscsi_tgt 00:25:11.233 LINK thread 00:25:11.233 CXX test/cpp_headers/bdev_zone.o 00:25:11.233 CXX test/cpp_headers/bit_array.o 00:25:11.233 LINK event_perf 00:25:11.492 LINK vhost_fuzz 00:25:11.492 LINK aer 00:25:11.492 CC test/event/reactor/reactor.o 00:25:11.750 
CXX test/cpp_headers/bit_pool.o 00:25:11.750 CC app/spdk_tgt/spdk_tgt.o 00:25:11.750 CC test/accel/dif/dif.o 00:25:11.750 LINK reactor 00:25:11.750 CXX test/cpp_headers/blob_bdev.o 00:25:11.750 LINK memory_ut 00:25:12.009 CC test/blobfs/mkfs/mkfs.o 00:25:12.009 CC examples/sock/hello_world/hello_sock.o 00:25:12.009 CC test/nvme/reset/reset.o 00:25:12.009 LINK spdk_tgt 00:25:12.266 CC test/event/reactor_perf/reactor_perf.o 00:25:12.266 CXX test/cpp_headers/blobfs_bdev.o 00:25:12.266 LINK mkfs 00:25:12.266 CXX test/cpp_headers/blobfs.o 00:25:12.266 LINK reset 00:25:12.266 LINK hello_sock 00:25:12.525 LINK reactor_perf 00:25:12.525 CXX test/cpp_headers/blob.o 00:25:12.525 CC app/spdk_lspci/spdk_lspci.o 00:25:12.525 CC test/nvme/sgl/sgl.o 00:25:12.783 CXX test/cpp_headers/conf.o 00:25:12.783 CC test/event/app_repeat/app_repeat.o 00:25:12.783 CC test/nvme/e2edp/nvme_dp.o 00:25:12.783 LINK iscsi_fuzz 00:25:12.783 LINK spdk_lspci 00:25:12.783 LINK dif 00:25:12.783 CC app/spdk_nvme_perf/perf.o 00:25:12.783 CC test/lvol/esnap/esnap.o 00:25:12.783 LINK app_repeat 00:25:13.042 CXX test/cpp_headers/config.o 00:25:13.042 CXX test/cpp_headers/cpuset.o 00:25:13.042 CC app/spdk_nvme_identify/identify.o 00:25:13.042 CC test/app/jsoncat/jsoncat.o 00:25:13.042 LINK nvme_dp 00:25:13.042 CXX test/cpp_headers/crc16.o 00:25:13.042 LINK sgl 00:25:13.042 LINK jsoncat 00:25:13.300 CC test/event/scheduler/scheduler.o 00:25:13.300 CC app/spdk_nvme_discover/discovery_aer.o 00:25:13.300 CC test/app/stub/stub.o 00:25:13.300 CXX test/cpp_headers/crc32.o 00:25:13.558 CC test/nvme/overhead/overhead.o 00:25:13.558 LINK scheduler 00:25:13.558 CXX test/cpp_headers/crc64.o 00:25:13.840 CC test/nvme/err_injection/err_injection.o 00:25:13.840 LINK spdk_nvme_discover 00:25:13.840 LINK stub 00:25:13.840 CXX test/cpp_headers/dif.o 00:25:14.098 LINK err_injection 00:25:14.098 CXX test/cpp_headers/dma.o 00:25:14.356 CC app/spdk_top/spdk_top.o 00:25:14.356 LINK overhead 00:25:14.356 LINK spdk_nvme_identify 00:25:14.356 LINK spdk_nvme_perf 00:25:14.356 CXX test/cpp_headers/endian.o 00:25:14.356 CC test/nvme/startup/startup.o 00:25:14.615 CC examples/vmd/lsvmd/lsvmd.o 00:25:14.615 CC test/nvme/reserve/reserve.o 00:25:14.615 CXX test/cpp_headers/env_dpdk.o 00:25:14.615 CC examples/vmd/led/led.o 00:25:14.874 LINK startup 00:25:14.874 CC test/nvme/simple_copy/simple_copy.o 00:25:14.874 LINK lsvmd 00:25:14.874 LINK reserve 00:25:14.874 CC app/vhost/vhost.o 00:25:14.874 LINK led 00:25:14.874 CXX test/cpp_headers/env.o 00:25:15.131 LINK simple_copy 00:25:15.131 CC test/nvme/connect_stress/connect_stress.o 00:25:15.131 CC app/spdk_dd/spdk_dd.o 00:25:15.131 LINK vhost 00:25:15.389 CXX test/cpp_headers/event.o 00:25:15.389 CC app/fio/nvme/fio_plugin.o 00:25:15.389 CC test/nvme/boot_partition/boot_partition.o 00:25:15.389 LINK spdk_top 00:25:15.389 CC test/bdev/bdevio/bdevio.o 00:25:15.649 LINK connect_stress 00:25:15.649 CXX test/cpp_headers/fd_group.o 00:25:15.649 LINK spdk_dd 00:25:15.649 CC examples/idxd/perf/perf.o 00:25:15.649 LINK boot_partition 00:25:15.908 CXX test/cpp_headers/fd.o 00:25:15.908 CC examples/accel/perf/accel_perf.o 00:25:15.908 CC test/nvme/compliance/nvme_compliance.o 00:25:15.908 CC examples/fsdev/hello_world/hello_fsdev.o 00:25:15.908 LINK bdevio 00:25:16.167 CXX test/cpp_headers/file.o 00:25:16.167 CC examples/blob/hello_world/hello_blob.o 00:25:16.167 LINK spdk_nvme 00:25:16.425 LINK idxd_perf 00:25:16.425 LINK hello_fsdev 00:25:16.425 CXX test/cpp_headers/fsdev.o 00:25:16.425 LINK nvme_compliance 00:25:16.425 CC 
app/fio/bdev/fio_plugin.o
00:25:16.425 LINK accel_perf
00:25:16.683 LINK hello_blob
00:25:16.683 CC examples/blob/cli/blobcli.o
00:25:16.683 CC test/nvme/fused_ordering/fused_ordering.o
00:25:16.683 CXX test/cpp_headers/fsdev_module.o
00:25:16.683 CC test/nvme/doorbell_aers/doorbell_aers.o
00:25:16.683 CXX test/cpp_headers/ftl.o
00:25:16.683 CC examples/nvme/hello_world/hello_world.o
00:25:16.941 CXX test/cpp_headers/fuse_dispatcher.o
00:25:16.941 LINK hello_world
00:25:17.199 LINK fused_ordering
00:25:17.199 LINK doorbell_aers
00:25:17.199 CC examples/nvme/reconnect/reconnect.o
00:25:17.199 LINK blobcli
00:25:17.199 LINK spdk_bdev
00:25:17.199 CXX test/cpp_headers/gpt_spec.o
00:25:17.199 CC test/nvme/fdp/fdp.o
00:25:17.199 CXX test/cpp_headers/hexlify.o
00:25:17.199 CXX test/cpp_headers/histogram_data.o
00:25:17.456 CC examples/nvme/nvme_manage/nvme_manage.o
00:25:17.456 CC test/nvme/cuse/cuse.o
00:25:17.456 CXX test/cpp_headers/idxd.o
00:25:17.456 CXX test/cpp_headers/idxd_spec.o
00:25:17.456 CXX test/cpp_headers/init.o
00:25:17.456 LINK reconnect
00:25:17.714 CC examples/nvme/arbitration/arbitration.o
00:25:17.714 CXX test/cpp_headers/ioat.o
00:25:17.972 LINK fdp
00:25:17.972 CC examples/nvme/hotplug/hotplug.o
00:25:17.972 CC examples/nvme/cmb_copy/cmb_copy.o
00:25:17.972 CXX test/cpp_headers/ioat_spec.o
00:25:17.972 CC examples/nvme/abort/abort.o
00:25:17.972 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:25:17.972 LINK nvme_manage
00:25:17.972 CXX test/cpp_headers/iscsi_spec.o
00:25:18.229 LINK hotplug
00:25:18.229 LINK pmr_persistence
00:25:18.229 CXX test/cpp_headers/json.o
00:25:18.229 LINK arbitration
00:25:18.229 CXX test/cpp_headers/jsonrpc.o
00:25:18.229 LINK cmb_copy
00:25:18.229 CXX test/cpp_headers/keyring.o
00:25:18.229 LINK abort
00:25:18.487 CXX test/cpp_headers/keyring_module.o
00:25:18.487 CXX test/cpp_headers/likely.o
00:25:18.487 CC examples/bdev/hello_world/hello_bdev.o
00:25:18.487 CXX test/cpp_headers/log.o
00:25:18.487 CXX test/cpp_headers/lvol.o
00:25:18.487 CXX test/cpp_headers/md5.o
00:25:18.487 CC examples/bdev/bdevperf/bdevperf.o
00:25:18.487 CXX test/cpp_headers/memory.o
00:25:18.487 CXX test/cpp_headers/mmio.o
00:25:18.488 CXX test/cpp_headers/nbd.o
00:25:18.746 CXX test/cpp_headers/net.o
00:25:18.746 CXX test/cpp_headers/notify.o
00:25:18.746 CXX test/cpp_headers/nvme.o
00:25:18.746 LINK hello_bdev
00:25:18.746 CXX test/cpp_headers/nvme_intel.o
00:25:18.746 CXX test/cpp_headers/nvme_ocssd.o
00:25:18.746 CXX test/cpp_headers/nvme_ocssd_spec.o
00:25:18.746 CXX test/cpp_headers/nvme_spec.o
00:25:19.004 CXX test/cpp_headers/nvme_zns.o
00:25:19.004 CXX test/cpp_headers/nvmf_cmd.o
00:25:19.004 CXX test/cpp_headers/nvmf_fc_spec.o
00:25:19.004 LINK cuse
00:25:19.004 CXX test/cpp_headers/nvmf.o
00:25:19.004 CXX test/cpp_headers/nvmf_spec.o
00:25:19.004 CXX test/cpp_headers/nvmf_transport.o
00:25:19.004 CXX test/cpp_headers/opal.o
00:25:19.263 CXX test/cpp_headers/opal_spec.o
00:25:19.263 CXX test/cpp_headers/pci_ids.o
00:25:19.263 CXX test/cpp_headers/pipe.o
00:25:19.263 CXX test/cpp_headers/queue.o
00:25:19.263 CXX test/cpp_headers/reduce.o
00:25:19.263 CXX test/cpp_headers/rpc.o
00:25:19.263 CXX test/cpp_headers/scheduler.o
00:25:19.263 CXX test/cpp_headers/scsi.o
00:25:19.263 CXX test/cpp_headers/scsi_spec.o
00:25:19.263 CXX test/cpp_headers/sock.o
00:25:19.521 CXX test/cpp_headers/stdinc.o
00:25:19.521 CXX test/cpp_headers/string.o
00:25:19.521 CXX test/cpp_headers/thread.o
00:25:19.521 CXX test/cpp_headers/trace.o
00:25:19.521 CXX test/cpp_headers/trace_parser.o
00:25:19.521 CXX test/cpp_headers/tree.o
00:25:19.521 CXX test/cpp_headers/ublk.o
00:25:19.521 CXX test/cpp_headers/util.o
00:25:19.521 CXX test/cpp_headers/uuid.o
00:25:19.521 CXX test/cpp_headers/version.o
00:25:19.521 CXX test/cpp_headers/vfio_user_pci.o
00:25:19.521 LINK esnap
00:25:19.521 CXX test/cpp_headers/vfio_user_spec.o
00:25:19.521 CXX test/cpp_headers/vhost.o
00:25:19.779 CXX test/cpp_headers/vmd.o
00:25:19.779 CXX test/cpp_headers/xor.o
00:25:19.779 CXX test/cpp_headers/zipf.o
00:25:19.779 LINK bdevperf
00:25:20.387 CC examples/nvmf/nvmf/nvmf.o
00:25:20.954 LINK nvmf
00:25:20.954
00:25:20.954 real 1m46.359s
00:25:20.954 user 10m33.799s
00:25:20.954 sys 1m53.561s
00:25:20.954 20:24:16 make -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:25:20.954 ************************************
00:25:20.954 END TEST make
00:25:20.954 20:24:16 make -- common/autotest_common.sh@10 -- $ set +x
00:25:20.954 ************************************
00:25:20.954 20:24:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:25:20.954 20:24:16 -- pm/common@29 -- $ signal_monitor_resources TERM
00:25:20.954 20:24:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:25:20.954 20:24:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:20.954 20:24:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:25:20.954 20:24:16 -- pm/common@44 -- $ pid=5302
00:25:20.954 20:24:16 -- pm/common@50 -- $ kill -TERM 5302
00:25:20.954 20:24:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:20.954 20:24:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:25:20.954 20:24:16 -- pm/common@44 -- $ pid=5304
00:25:20.954 20:24:16 -- pm/common@50 -- $ kill -TERM 5304
00:25:21.212 20:24:16 -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:25:21.212 20:24:16 -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:25:21.212 20:24:16 -- common/autotest_common.sh@1681 -- # lcov --version
00:25:21.212 20:24:16 -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:25:21.212 20:24:16 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:21.212 20:24:16 -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:21.212 20:24:16 -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:21.212 20:24:16 -- scripts/common.sh@336 -- # IFS=.-:
00:25:21.212 20:24:16 -- scripts/common.sh@336 -- # read -ra ver1
00:25:21.212 20:24:16 -- scripts/common.sh@337 -- # IFS=.-:
00:25:21.212 20:24:16 -- scripts/common.sh@337 -- # read -ra ver2
00:25:21.212 20:24:16 -- scripts/common.sh@338 -- # local 'op=<'
00:25:21.212 20:24:16 -- scripts/common.sh@340 -- # ver1_l=2
00:25:21.212 20:24:16 -- scripts/common.sh@341 -- # ver2_l=1
00:25:21.212 20:24:16 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:21.212 20:24:16 -- scripts/common.sh@344 -- # case "$op" in
00:25:21.212 20:24:16 -- scripts/common.sh@345 -- # : 1
00:25:21.212 20:24:16 -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:21.212 20:24:16 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:21.212 20:24:16 -- scripts/common.sh@365 -- # decimal 1
00:25:21.212 20:24:16 -- scripts/common.sh@353 -- # local d=1
00:25:21.212 20:24:16 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:21.212 20:24:16 -- scripts/common.sh@355 -- # echo 1
00:25:21.212 20:24:16 -- scripts/common.sh@365 -- # ver1[v]=1
00:25:21.212 20:24:16 -- scripts/common.sh@366 -- # decimal 2
00:25:21.212 20:24:16 -- scripts/common.sh@353 -- # local d=2
00:25:21.212 20:24:16 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:21.212 20:24:16 -- scripts/common.sh@355 -- # echo 2
00:25:21.212 20:24:16 -- scripts/common.sh@366 -- # ver2[v]=2
00:25:21.212 20:24:16 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:21.212 20:24:16 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:21.212 20:24:16 -- scripts/common.sh@368 -- # return 0
00:25:21.213 20:24:16 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:21.213 20:24:16 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:25:21.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:21.213 --rc genhtml_branch_coverage=1
00:25:21.213 --rc genhtml_function_coverage=1
00:25:21.213 --rc genhtml_legend=1
00:25:21.213 --rc geninfo_all_blocks=1
00:25:21.213 --rc geninfo_unexecuted_blocks=1
00:25:21.213
00:25:21.213 '
00:25:21.213 20:24:16 -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:25:21.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:21.213 --rc genhtml_branch_coverage=1
00:25:21.213 --rc genhtml_function_coverage=1
00:25:21.213 --rc genhtml_legend=1
00:25:21.213 --rc geninfo_all_blocks=1
00:25:21.213 --rc geninfo_unexecuted_blocks=1
00:25:21.213
00:25:21.213 '
00:25:21.213 20:24:16 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:25:21.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:21.213 --rc genhtml_branch_coverage=1
00:25:21.213 --rc genhtml_function_coverage=1
00:25:21.213 --rc genhtml_legend=1
00:25:21.213 --rc geninfo_all_blocks=1
00:25:21.213 --rc geninfo_unexecuted_blocks=1
00:25:21.213
00:25:21.213 '
00:25:21.213 20:24:16 -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:25:21.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:21.213 --rc genhtml_branch_coverage=1
00:25:21.213 --rc genhtml_function_coverage=1
00:25:21.213 --rc genhtml_legend=1
00:25:21.213 --rc geninfo_all_blocks=1
00:25:21.213 --rc geninfo_unexecuted_blocks=1
00:25:21.213
00:25:21.213 '
00:25:21.213 20:24:16 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:25:21.213 20:24:16 -- nvmf/common.sh@7 -- # uname -s
00:25:21.213 20:24:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:21.213 20:24:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:21.213 20:24:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:21.213 20:24:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:21.213 20:24:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:21.213 20:24:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:21.213 20:24:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:21.213 20:24:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:21.213 20:24:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:21.213 20:24:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:21.213 20:24:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703
00:25:21.213 20:24:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703
00:25:21.213 20:24:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:21.213 20:24:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:21.213 20:24:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:25:21.213 20:24:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:21.213 20:24:16 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:21.213 20:24:16 -- scripts/common.sh@15 -- # shopt -s extglob
00:25:21.213 20:24:16 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:21.213 20:24:16 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:21.213 20:24:16 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:21.213 20:24:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:21.213 20:24:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:21.213 20:24:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:21.213 20:24:16 -- paths/export.sh@5 -- # export PATH
00:25:21.213 20:24:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:21.213 20:24:16 -- nvmf/common.sh@51 -- # : 0
00:25:21.213 20:24:16 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:25:21.213 20:24:16 -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:25:21.213 20:24:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:21.213 20:24:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:21.213 20:24:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:21.213 20:24:16 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:25:21.213 20:24:16 -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:25:21.213 20:24:16 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:25:21.213 20:24:16 -- nvmf/common.sh@55 -- # have_pci_nics=0
00:25:21.213 20:24:16 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:25:21.213 20:24:16 -- spdk/autotest.sh@32 -- # uname -s
00:25:21.213 20:24:16 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:25:21.213 20:24:16 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:25:21.213 20:24:16 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
00:25:21.213 20:24:16 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t'
00:25:21.213 20:24:16 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps
00:25:21.213 20:24:16 -- spdk/autotest.sh@44 -- # modprobe nbd
00:25:21.213 20:24:16 -- spdk/autotest.sh@46 -- # type -P udevadm
00:25:21.213 20:24:16 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:25:21.213 20:24:16 -- spdk/autotest.sh@48 -- # udevadm_pid=56240
00:25:21.213 20:24:16 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:25:21.213 20:24:16 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:25:21.213 20:24:16 -- pm/common@17 -- # local monitor
00:25:21.213 20:24:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:25:21.213 20:24:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:25:21.213 20:24:16 -- pm/common@25 -- # sleep 1
00:25:21.213 20:24:16 -- pm/common@21 -- # date +%s
00:25:21.213 20:24:16 -- pm/common@21 -- # date +%s
00:25:21.213 20:24:16 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727814256
00:25:21.213 20:24:16 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727814256
00:25:21.472 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727814256_collect-cpu-load.pm.log
00:25:21.472 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727814256_collect-vmstat.pm.log
00:25:22.409 20:24:17 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:25:22.409 20:24:17 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:25:22.409 20:24:17 -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:22.409 20:24:17 -- common/autotest_common.sh@10 -- # set +x
00:25:22.409 20:24:17 -- spdk/autotest.sh@59 -- # create_test_list
00:25:22.409 20:24:17 -- common/autotest_common.sh@748 -- # xtrace_disable
00:25:22.409 20:24:17 -- common/autotest_common.sh@10 -- # set +x
00:25:22.409 20:24:17 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh
00:25:22.409 20:24:17 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk
00:25:22.409 20:24:17 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk
00:25:22.409 20:24:17 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output
00:25:22.409 20:24:17 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk
00:25:22.409 20:24:17 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:25:22.409 20:24:17 -- common/autotest_common.sh@1455 -- # uname
00:25:22.409 20:24:17 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:25:22.409 20:24:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:25:22.409 20:24:17 -- common/autotest_common.sh@1475 -- # uname
00:25:22.409 20:24:17 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:25:22.409 20:24:17 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:25:22.409 20:24:17 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:25:22.409 lcov: LCOV version 1.15
00:25:22.409 20:24:17 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:25:40.492 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:25:40.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno
00:25:58.572 20:24:53 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:25:58.572 20:24:53 -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:58.572 20:24:53 -- common/autotest_common.sh@10 -- # set +x
00:25:58.572 20:24:53 -- spdk/autotest.sh@78 -- # rm -f
00:25:58.572 20:24:53 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:25:59.138 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:25:59.139 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:25:59.397 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:25:59.397 20:24:54 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:25:59.398 20:24:54 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:25:59.398 20:24:54 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:25:59.398 20:24:54 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:25:59.398 20:24:54 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:25:59.398 20:24:54 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:25:59.398 20:24:54 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:25:59.398 20:24:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:25:59.398 20:24:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:25:59.398 20:24:54 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:25:59.398 20:24:54 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1
00:25:59.398 20:24:54 -- common/autotest_common.sh@1648 -- # local device=nvme1n1
00:25:59.398 20:24:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:25:59.398 20:24:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:25:59.398 20:24:54 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:25:59.398 20:24:54 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2
00:25:59.398 20:24:54 -- common/autotest_common.sh@1648 -- # local device=nvme1n2
00:25:59.398 20:24:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]]
00:25:59.398 20:24:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:25:59.398 20:24:54 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:25:59.398 20:24:54 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3
00:25:59.398 20:24:54 -- common/autotest_common.sh@1648 -- # local device=nvme1n3
00:25:59.398 20:24:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]]
00:25:59.398 20:24:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:25:59.398 20:24:54 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:25:59.398 20:24:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:25:59.398 20:24:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:25:59.398 20:24:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:25:59.398 20:24:54 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:25:59.398 20:24:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
No valid GPT data, bailing
00:25:59.398 20:24:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:25:59.398 20:24:54 -- scripts/common.sh@394 -- # pt=
00:25:59.398 20:24:54 -- scripts/common.sh@395 -- # return 1
00:25:59.398 20:24:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:25:59.398 1+0 records in
00:25:59.398 1+0 records out
00:25:59.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503019 s, 208 MB/s
00:25:59.398 20:24:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:25:59.398 20:24:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:25:59.398 20:24:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1
00:25:59.398 20:24:54 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt
00:25:59.398 20:24:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1
No valid GPT data, bailing
00:25:59.398 20:24:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:25:59.398 20:24:54 -- scripts/common.sh@394 -- # pt=
00:25:59.398 20:24:54 -- scripts/common.sh@395 -- # return 1
00:25:59.398 20:24:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:25:59.398 1+0 records in
00:25:59.398 1+0 records out
00:25:59.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00351684 s, 298 MB/s
00:25:59.398 20:24:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:25:59.398 20:24:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:25:59.398 20:24:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2
00:25:59.398 20:24:54 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt
00:25:59.398 20:24:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2
No valid GPT data, bailing
00:25:59.398 20:24:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2
00:25:59.398 20:24:54 -- scripts/common.sh@394 -- # pt=
00:25:59.398 20:24:54 -- scripts/common.sh@395 -- # return 1
00:25:59.398 20:24:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1
00:25:59.657 1+0 records in
00:25:59.657 1+0 records out
00:25:59.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00392496 s, 267 MB/s
00:25:59.657 20:24:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:25:59.657 20:24:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:25:59.657 20:24:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3
00:25:59.657 20:24:54 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt
00:25:59.657 20:24:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3
No valid GPT data, bailing
00:25:59.657 20:24:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3
00:25:59.657 20:24:54 -- scripts/common.sh@394 -- # pt=
00:25:59.657 20:24:54 -- scripts/common.sh@395 -- # return 1
00:25:59.657 20:24:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1
00:25:59.657 1+0 records in
00:25:59.657 1+0 records out
00:25:59.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00336484 s, 312 MB/s
00:25:59.657 20:24:54 -- spdk/autotest.sh@105 -- # sync
00:25:59.657 20:24:54 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:25:59.657 20:24:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:25:59.657 20:24:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:26:01.559 20:24:56 -- spdk/autotest.sh@111 -- # uname -s
00:26:01.559 20:24:56 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:26:01.559 20:24:56 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:26:01.559 20:24:56 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:26:02.125 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:26:02.125 Hugepages
00:26:02.125 node hugesize free / total
00:26:02.125 node0 1048576kB 0 / 0
00:26:02.125 node0 2048kB 0 / 0
00:26:02.125
00:26:02.125 Type BDF Vendor Device NUMA Driver Device Block devices
00:26:02.125 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:26:02.384 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:26:02.384 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:26:02.384 20:24:57 -- spdk/autotest.sh@117 -- # uname -s
00:26:02.384 20:24:57 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:26:02.384 20:24:57 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:26:02.384 20:24:57 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:26:02.950 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:26:02.950 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:26:03.209 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:26:03.209 20:24:58 -- common/autotest_common.sh@1515 -- # sleep 1
00:26:04.146 20:24:59 -- common/autotest_common.sh@1516 -- # bdfs=()
00:26:04.146 20:24:59 -- common/autotest_common.sh@1516 -- # local bdfs
00:26:04.146 20:24:59 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs))
00:26:04.146 20:24:59 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs
00:26:04.146 20:24:59 -- common/autotest_common.sh@1496 -- # bdfs=()
00:26:04.146 20:24:59 -- common/autotest_common.sh@1496 -- # local bdfs
00:26:04.146 20:24:59 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:26:04.146 20:24:59 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:26:04.146 20:24:59 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:26:04.146 20:24:59 -- common/autotest_common.sh@1498 -- # (( 2 == 0 ))
00:26:04.146 20:24:59 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:26:04.146 20:24:59 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:26:04.406 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:26:04.406 Waiting for block devices as requested
00:26:04.664 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:26:04.664 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:26:04.664 20:24:59 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:26:04.664 20:24:59 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:26:04.664 20:24:59 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:26:04.664 20:24:59 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme
00:26:04.664 20:24:59 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:26:04.664 20:24:59 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]]
00:26:04.664 20:24:59 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:26:04.664 20:24:59 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1
00:26:04.664 20:24:59 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1
00:26:04.664 20:24:59 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]]
00:26:04.664 20:24:59 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1
00:26:04.664 20:24:59 -- common/autotest_common.sh@1529 -- # grep oacs
00:26:04.664 20:24:59 -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:26:04.664 20:24:59 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a'
00:26:04.664 20:24:59 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:26:04.664 20:24:59 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:26:04.664 20:24:59 -- common/autotest_common.sh@1538 -- # grep unvmcap
00:26:04.664 20:24:59 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1
00:26:04.665 20:24:59 -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:26:04.665 20:24:59 -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:26:04.665 20:24:59 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]]
00:26:04.665 20:24:59 -- common/autotest_common.sh@1541 -- # continue
00:26:04.665 20:24:59 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:26:04.665 20:24:59 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0
00:26:04.665 20:24:59 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:26:04.665 20:24:59 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme
00:26:04.922 20:24:59 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:26:04.922 20:24:59 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]]
00:26:04.922 20:24:59 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:26:04.922 20:24:59 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0
00:26:04.922 20:24:59 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0
00:26:04.922 20:24:59 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]]
00:26:04.922 20:24:59 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0
00:26:04.922 20:24:59 -- common/autotest_common.sh@1529 -- # grep oacs
00:26:04.922 20:24:59 -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:26:04.922 20:24:59 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a'
00:26:04.922 20:24:59 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:26:04.922 20:24:59 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:26:04.922 20:24:59 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0
00:26:04.922 20:24:59 -- common/autotest_common.sh@1538 -- # grep unvmcap
00:26:04.922 20:24:59 -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:26:04.922 20:24:59 -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:26:04.922 20:24:59 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]]
00:26:04.922 20:24:59 -- common/autotest_common.sh@1541 -- # continue
00:26:04.922 20:24:59 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:26:04.922 20:24:59 -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:04.922 20:24:59 -- common/autotest_common.sh@10 -- # set +x
00:26:04.922 20:24:59 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:26:04.922 20:24:59 -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:04.922 20:24:59 -- common/autotest_common.sh@10 -- # set +x
00:26:04.922 20:24:59 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:26:05.489 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:26:05.489 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:26:05.747 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:26:05.747 20:25:00 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:26:05.747 20:25:00 -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:05.747 20:25:00 -- common/autotest_common.sh@10 -- # set +x
00:26:05.747 20:25:00 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:26:05.747 20:25:00 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs
00:26:05.747 20:25:00 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54
00:26:05.747 20:25:00 -- common/autotest_common.sh@1561 -- # bdfs=()
00:26:05.747 20:25:00 -- common/autotest_common.sh@1561 -- # _bdfs=()
00:26:05.747 20:25:00 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs
00:26:05.747 20:25:00 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs))
00:26:05.747 20:25:00 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs
00:26:05.747 20:25:00 -- common/autotest_common.sh@1496 -- # bdfs=()
00:26:05.747 20:25:00 -- common/autotest_common.sh@1496 -- # local bdfs
00:26:05.747 20:25:00 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:26:05.747 20:25:00 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:26:05.747 20:25:00 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:26:05.747 20:25:00 -- common/autotest_common.sh@1498 -- # (( 2 == 0 ))
00:26:05.747 20:25:00 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:26:05.747 20:25:00 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:26:05.747 20:25:00 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:26:05.748 20:25:00 -- common/autotest_common.sh@1564 -- # device=0x0010
00:26:05.748 20:25:00 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:26:05.748 20:25:00 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:26:05.748 20:25:00 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device
00:26:05.748 20:25:00 -- common/autotest_common.sh@1564 -- # device=0x0010
00:26:05.748 20:25:00 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:26:05.748 20:25:00 -- common/autotest_common.sh@1570 -- # (( 0 > 0 ))
00:26:05.748 20:25:00 -- common/autotest_common.sh@1570 -- # return 0
00:26:05.748 20:25:00 -- common/autotest_common.sh@1577 -- # [[ -z '' ]]
00:26:05.748 20:25:00 -- common/autotest_common.sh@1578 -- # return 0
00:26:05.748 20:25:00 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:26:05.748 20:25:00 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:26:05.748 20:25:00 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:26:05.748 20:25:00 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:26:05.748 20:25:00 -- spdk/autotest.sh@149 -- # timing_enter lib
00:26:05.748 20:25:00 -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:05.748 20:25:00 -- common/autotest_common.sh@10 -- # set +x
00:26:05.748 20:25:00 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:26:05.748 20:25:00 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:26:05.748 20:25:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:26:05.748 20:25:00 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:05.748 20:25:00 -- common/autotest_common.sh@10 -- # set +x
00:26:05.748 ************************************
00:26:05.748 START TEST env
00:26:05.748 ************************************
00:26:05.748 20:25:00 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:26:06.006 * Looking for test storage...
00:26:06.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:26:06.006 20:25:01 env -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:26:06.006 20:25:01 env -- common/autotest_common.sh@1681 -- # lcov --version
00:26:06.006 20:25:01 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:26:06.006 20:25:01 env -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:26:06.006 20:25:01 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:06.006 20:25:01 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:06.006 20:25:01 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:06.006 20:25:01 env -- scripts/common.sh@336 -- # IFS=.-:
00:26:06.006 20:25:01 env -- scripts/common.sh@336 -- # read -ra ver1
00:26:06.006 20:25:01 env -- scripts/common.sh@337 -- # IFS=.-:
00:26:06.006 20:25:01 env -- scripts/common.sh@337 -- # read -ra ver2
00:26:06.006 20:25:01 env -- scripts/common.sh@338 -- # local 'op=<'
00:26:06.006 20:25:01 env -- scripts/common.sh@340 -- # ver1_l=2
00:26:06.006 20:25:01 env -- scripts/common.sh@341 -- # ver2_l=1
00:26:06.006 20:25:01 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:06.006 20:25:01 env -- scripts/common.sh@344 -- # case "$op" in
00:26:06.006 20:25:01 env -- scripts/common.sh@345 -- # : 1
00:26:06.006 20:25:01 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:06.006 20:25:01 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:06.006 20:25:01 env -- scripts/common.sh@365 -- # decimal 1
00:26:06.006 20:25:01 env -- scripts/common.sh@353 -- # local d=1
00:26:06.006 20:25:01 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:06.006 20:25:01 env -- scripts/common.sh@355 -- # echo 1
00:26:06.006 20:25:01 env -- scripts/common.sh@365 -- # ver1[v]=1
00:26:06.006 20:25:01 env -- scripts/common.sh@366 -- # decimal 2
00:26:06.006 20:25:01 env -- scripts/common.sh@353 -- # local d=2
00:26:06.006 20:25:01 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:06.006 20:25:01 env -- scripts/common.sh@355 -- # echo 2
00:26:06.006 20:25:01 env -- scripts/common.sh@366 -- # ver2[v]=2
00:26:06.006 20:25:01 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:06.006 20:25:01 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:06.006 20:25:01 env -- scripts/common.sh@368 -- # return 0
00:26:06.006 20:25:01 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:06.006 20:25:01 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:26:06.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:06.006 --rc genhtml_branch_coverage=1
00:26:06.006 --rc genhtml_function_coverage=1
00:26:06.006 --rc genhtml_legend=1
00:26:06.007 --rc geninfo_all_blocks=1
00:26:06.007 --rc geninfo_unexecuted_blocks=1
00:26:06.007
00:26:06.007 '
00:26:06.007 20:25:01 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:26:06.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:06.007 --rc genhtml_branch_coverage=1
00:26:06.007 --rc genhtml_function_coverage=1
00:26:06.007 --rc genhtml_legend=1
00:26:06.007 --rc geninfo_all_blocks=1
00:26:06.007 --rc geninfo_unexecuted_blocks=1
00:26:06.007
00:26:06.007 '
00:26:06.007 20:25:01 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:26:06.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:06.007 --rc genhtml_branch_coverage=1
00:26:06.007 --rc genhtml_function_coverage=1
00:26:06.007 --rc genhtml_legend=1
00:26:06.007 --rc geninfo_all_blocks=1
00:26:06.007 --rc geninfo_unexecuted_blocks=1
00:26:06.007
00:26:06.007 '
00:26:06.007 20:25:01 env -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:26:06.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:06.007 --rc genhtml_branch_coverage=1
00:26:06.007 --rc genhtml_function_coverage=1
00:26:06.007 --rc genhtml_legend=1
00:26:06.007 --rc geninfo_all_blocks=1
00:26:06.007 --rc geninfo_unexecuted_blocks=1
00:26:06.007
00:26:06.007 '
00:26:06.007 20:25:01 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:26:06.007 20:25:01 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:26:06.007 20:25:01 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:06.007 20:25:01 env -- common/autotest_common.sh@10 -- # set +x
00:26:06.007 ************************************
00:26:06.007 START TEST env_memory
00:26:06.007 ************************************
00:26:06.007 20:25:01 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:26:06.007
00:26:06.007
00:26:06.007 CUnit - A unit testing framework for C - Version 2.1-3
00:26:06.007 http://cunit.sourceforge.net/
00:26:06.007
00:26:06.007
00:26:06.007 Suite: memory
00:26:06.007 Test: alloc and free memory map ...[2024-10-01 20:25:01.170292] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:26:06.007 passed
00:26:06.007 Test: mem map translation ...[2024-10-01 20:25:01.202302] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:26:06.007 [2024-10-01 20:25:01.202588] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:26:06.007 [2024-10-01 20:25:01.202841] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:26:06.007 [2024-10-01 20:25:01.202979] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:26:06.007 passed
00:26:06.265 Test: mem map registration ...[2024-10-01 20:25:01.267701] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:26:06.265 [2024-10-01 20:25:01.268018] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:26:06.265 passed
00:26:06.265 Test: mem map adjacent registrations ...passed
00:26:06.265
00:26:06.265 Run Summary: Type Total Ran Passed Failed Inactive
00:26:06.265 suites 1 1 n/a 0 0
00:26:06.265 tests 4 4 4 0 0
00:26:06.265 asserts 152 152 152 0 n/a
00:26:06.265
00:26:06.265 Elapsed time = 0.215 seconds
00:26:06.265
00:26:06.265 real 0m0.234s
00:26:06.265 ************************************
00:26:06.265 END TEST env_memory
00:26:06.265 ************************************
00:26:06.265 user 0m0.217s
00:26:06.265 sys 0m0.011s
00:26:06.265 20:25:01 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:06.265 20:25:01 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:26:06.265 20:25:01 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:26:06.265 20:25:01 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:26:06.265 20:25:01 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:06.265 20:25:01 env -- common/autotest_common.sh@10 -- # set +x
00:26:06.265 ************************************
00:26:06.265 START TEST env_vtophys
00:26:06.265 ************************************
00:26:06.265 20:25:01 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:26:06.265 EAL: lib.eal log level changed from notice to debug
00:26:06.265 EAL: Detected lcore 0 as core 0 on socket 0
00:26:06.265 EAL: Detected lcore 1 as core 0 on socket 0
00:26:06.265 EAL: Detected lcore 2 as core 0 on socket 0
00:26:06.265 EAL: Detected lcore 3 as core 0 on socket 0
00:26:06.265 EAL: Detected lcore 4 as core 0 on socket 0
00:26:06.265 EAL: Detected lcore 5 as core 0 on socket 0
00:26:06.265 EAL: Detected lcore 6 as core 0 on socket 0
00:26:06.265 EAL: Detected lcore 7 as core 0 on socket 0
00:26:06.265 EAL: Detected lcore 8 as core 0 on socket 0
00:26:06.265 EAL: Detected lcore 9 as core 0 on socket 0
00:26:06.265 EAL: Maximum logical cores by configuration: 128
00:26:06.265 EAL: Detected CPU lcores: 10
00:26:06.265 EAL: Detected NUMA nodes: 1
00:26:06.265 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:26:06.265 EAL: Detected shared linkage of DPDK
00:26:06.265 EAL: No shared files mode enabled, IPC will be disabled
00:26:06.265 EAL: Selected IOVA mode 'PA'
00:26:06.265 EAL: Probing VFIO support...
00:26:06.265 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:26:06.265 EAL: VFIO modules not loaded, skipping VFIO support...
00:26:06.265 EAL: Ask a virtual area of 0x2e000 bytes
00:26:06.265 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:26:06.265 EAL: Setting up physically contiguous memory...
00:26:06.265 EAL: Setting maximum number of open files to 524288
00:26:06.265 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:26:06.265 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:26:06.265 EAL: Ask a virtual area of 0x61000 bytes
00:26:06.265 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:26:06.265 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:26:06.265 EAL: Ask a virtual area of 0x400000000 bytes
00:26:06.265 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:26:06.265 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:26:06.265 EAL: Ask a virtual area of 0x61000 bytes
00:26:06.265 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:26:06.265 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:26:06.265 EAL: Ask a virtual area of 0x400000000 bytes
00:26:06.265 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:26:06.265 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:26:06.265 EAL: Ask a virtual area of 0x61000 bytes
00:26:06.265 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:26:06.265 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:26:06.265 EAL: Ask a virtual area of 0x400000000 bytes
00:26:06.265 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:26:06.265 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:26:06.265 EAL: Ask a virtual area of 0x61000 bytes
00:26:06.265 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:26:06.265 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:26:06.265 EAL: Ask a virtual area of 0x400000000 bytes
00:26:06.265 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:26:06.265 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:26:06.265 EAL: Hugepages will be freed exactly as allocated.
00:26:06.265 EAL: No shared files mode enabled, IPC is disabled
00:26:06.265 EAL: No shared files mode enabled, IPC is disabled
00:26:06.523 EAL: TSC frequency is ~2200000 KHz
00:26:06.523 EAL: Main lcore 0 is ready (tid=7fc0e6034a00;cpuset=[0])
00:26:06.523 EAL: Trying to obtain current memory policy.
00:26:06.523 EAL: Setting policy MPOL_PREFERRED for socket 0
00:26:06.523 EAL: Restoring previous memory policy: 0
00:26:06.523 EAL: request: mp_malloc_sync
00:26:06.523 EAL: No shared files mode enabled, IPC is disabled
00:26:06.523 EAL: Heap on socket 0 was expanded by 2MB
00:26:06.523 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:26:06.523 EAL: No PCI address specified using 'addr=' in: bus=pci
00:26:06.523 EAL: Mem event callback 'spdk:(nil)' registered
00:26:06.523 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
00:26:06.523
00:26:06.523
00:26:06.523 CUnit - A unit testing framework for C - Version 2.1-3
00:26:06.523 http://cunit.sourceforge.net/
00:26:06.523
00:26:06.523
00:26:06.523 Suite: components_suite
00:26:06.524 Test: vtophys_malloc_test ...passed
00:26:06.524 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:26:06.524 EAL: Setting policy MPOL_PREFERRED for socket 0
00:26:06.524 EAL: Restoring previous memory policy: 4
00:26:06.524 EAL: Calling mem event callback 'spdk:(nil)'
00:26:06.524 EAL: request: mp_malloc_sync
00:26:06.524 EAL: No shared files mode enabled, IPC is disabled
00:26:06.524 EAL: Heap on socket 0 was expanded by 4MB
00:26:06.524 EAL: Calling mem event callback 'spdk:(nil)'
00:26:06.524 EAL: request: mp_malloc_sync
00:26:06.524 EAL: No shared files mode enabled, IPC is disabled
00:26:06.524 EAL: Heap on socket 0 was shrunk by 4MB
00:26:06.524 EAL: Trying to obtain current memory policy.
00:26:06.524 EAL: Setting policy MPOL_PREFERRED for socket 0
00:26:06.524 EAL: Restoring previous memory policy: 4
00:26:06.524 EAL: Calling mem event callback 'spdk:(nil)'
00:26:06.524 EAL: request: mp_malloc_sync
00:26:06.524 EAL: No shared files mode enabled, IPC is disabled
00:26:06.524 EAL: Heap on socket 0 was expanded by 6MB
00:26:06.524 EAL: Calling mem event callback 'spdk:(nil)'
00:26:06.524 EAL: request: mp_malloc_sync
00:26:06.524 EAL: No shared files mode enabled, IPC is disabled
00:26:06.524 EAL: Heap on socket 0 was shrunk by 6MB
00:26:06.524 EAL: Trying to obtain current memory policy.
00:26:06.524 EAL: Setting policy MPOL_PREFERRED for socket 0
00:26:06.524 EAL: Restoring previous memory policy: 4
00:26:06.524 EAL: Calling mem event callback 'spdk:(nil)'
00:26:06.524 EAL: request: mp_malloc_sync
00:26:06.524 EAL: No shared files mode enabled, IPC is disabled
00:26:06.524 EAL: Heap on socket 0 was expanded by 10MB
00:26:06.524 EAL: Calling mem event callback 'spdk:(nil)'
00:26:06.524 EAL: request: mp_malloc_sync
00:26:06.524 EAL: No shared files mode enabled, IPC is disabled
00:26:06.524 EAL: Heap on socket 0 was shrunk by 10MB
00:26:06.524 EAL: Trying to obtain current memory policy.
00:26:06.524 EAL: Setting policy MPOL_PREFERRED for socket 0
00:26:06.524 EAL: Restoring previous memory policy: 4
00:26:06.524 EAL: Calling mem event callback 'spdk:(nil)'
00:26:06.524 EAL: request: mp_malloc_sync
00:26:06.524 EAL: No shared files mode enabled, IPC is disabled
00:26:06.524 EAL: Heap on socket 0 was expanded by 18MB
00:26:06.524 EAL: Calling mem event callback 'spdk:(nil)'
00:26:06.524 EAL: request: mp_malloc_sync
00:26:06.524 EAL: No shared files mode enabled, IPC is disabled
00:26:06.524 EAL: Heap on socket 0 was shrunk by 18MB
00:26:06.524 EAL: Trying to obtain current memory policy.
00:26:06.524 EAL: Setting policy MPOL_PREFERRED for socket 0
00:26:06.524 EAL: Restoring previous memory policy: 4
00:26:06.524 EAL: Calling mem event callback 'spdk:(nil)'
00:26:06.524 EAL: request: mp_malloc_sync
00:26:06.524 EAL: No shared files mode enabled, IPC is disabled
00:26:06.524 EAL: Heap on socket 0 was expanded by 34MB
00:26:06.524 EAL: Calling mem event callback 'spdk:(nil)'
00:26:06.524 EAL: request: mp_malloc_sync
00:26:06.524 EAL: No shared files mode enabled, IPC is disabled
00:26:06.524 EAL: Heap on socket 0 was shrunk by 34MB
00:26:06.524 EAL: Trying to obtain current memory policy.
00:26:06.524 EAL: Setting policy MPOL_PREFERRED for socket 0
00:26:06.524 EAL: Restoring previous memory policy: 4
00:26:06.524 EAL: Calling mem event callback 'spdk:(nil)'
00:26:06.524 EAL: request: mp_malloc_sync
00:26:06.524 EAL: No shared files mode enabled, IPC is disabled
00:26:06.524 EAL: Heap on socket 0 was expanded by 66MB
00:26:06.524 EAL: Calling mem event callback 'spdk:(nil)'
00:26:06.524 EAL: request: mp_malloc_sync
00:26:06.524 EAL: No shared files mode enabled, IPC is disabled
00:26:06.524 EAL: Heap on socket 0 was shrunk by 66MB
00:26:06.524 EAL: Trying to obtain current memory policy.
00:26:06.524 EAL: Setting policy MPOL_PREFERRED for socket 0
00:26:06.524 EAL: Restoring previous memory policy: 4
00:26:06.524 EAL: Calling mem event callback 'spdk:(nil)'
00:26:06.524 EAL: request: mp_malloc_sync
00:26:06.524 EAL: No shared files mode enabled, IPC is disabled
00:26:06.524 EAL: Heap on socket 0 was expanded by 130MB
00:26:06.524 EAL: Calling mem event callback 'spdk:(nil)'
00:26:06.524 EAL: request: mp_malloc_sync
00:26:06.524 EAL: No shared files mode enabled, IPC is disabled
00:26:06.524 EAL: Heap on socket 0 was shrunk by 130MB
00:26:06.524 EAL: Trying to obtain current memory policy.
00:26:06.524 EAL: Setting policy MPOL_PREFERRED for socket 0
00:26:06.524 EAL: Restoring previous memory policy: 4
00:26:06.524 EAL: Calling mem event callback 'spdk:(nil)'
00:26:06.524 EAL: request: mp_malloc_sync
00:26:06.524 EAL: No shared files mode enabled, IPC is disabled
00:26:06.524 EAL: Heap on socket 0 was expanded by 258MB
00:26:06.524 EAL: Calling mem event callback 'spdk:(nil)'
00:26:06.524 EAL: request: mp_malloc_sync
00:26:06.524 EAL: No shared files mode enabled, IPC is disabled
00:26:06.524 EAL: Heap on socket 0 was shrunk by 258MB
00:26:06.524 EAL: Trying to obtain current memory policy.
00:26:06.524 EAL: Setting policy MPOL_PREFERRED for socket 0
00:26:06.783 EAL: Restoring previous memory policy: 4
00:26:06.783 EAL: Calling mem event callback 'spdk:(nil)'
00:26:06.783 EAL: request: mp_malloc_sync
00:26:06.783 EAL: No shared files mode enabled, IPC is disabled
00:26:06.783 EAL: Heap on socket 0 was expanded by 514MB
00:26:06.783 EAL: Calling mem event callback 'spdk:(nil)'
00:26:06.783 EAL: request: mp_malloc_sync
00:26:06.783 EAL: No shared files mode enabled, IPC is disabled
00:26:06.783 EAL: Heap on socket 0 was shrunk by 514MB
00:26:06.783 EAL: Trying to obtain current memory policy.
00:26:06.783 EAL: Setting policy MPOL_PREFERRED for socket 0
00:26:06.783 EAL: Restoring previous memory policy: 4
00:26:06.783 EAL: Calling mem event callback 'spdk:(nil)'
00:26:06.783 EAL: request: mp_malloc_sync
00:26:06.783 EAL: No shared files mode enabled, IPC is disabled
00:26:06.783 EAL: Heap on socket 0 was expanded by 1026MB
00:26:07.042 EAL: Calling mem event callback 'spdk:(nil)'
00:26:07.042 passed
00:26:07.042
00:26:07.042 Run Summary: Type Total Ran Passed Failed Inactive
00:26:07.042 suites 1 1 n/a 0 0
00:26:07.042 tests 2 2 2 0 0
00:26:07.042 asserts 5379 5379 5379 0 n/a
00:26:07.042
00:26:07.042 Elapsed time = 0.674 seconds
00:26:07.042 EAL: request: mp_malloc_sync
00:26:07.042 EAL: No shared files mode enabled, IPC is disabled
00:26:07.042 EAL: Heap on socket 0 was shrunk by 1026MB
00:26:07.042 EAL: Calling mem event callback 'spdk:(nil)'
00:26:07.042 EAL: request: mp_malloc_sync
00:26:07.042 EAL: No shared files mode enabled, IPC is disabled
00:26:07.042 EAL: Heap on socket 0 was shrunk by 2MB
00:26:07.042 EAL: No shared files mode enabled, IPC is disabled
00:26:07.042 EAL: No shared files mode enabled, IPC is disabled
00:26:07.042 EAL: No shared files mode enabled, IPC is disabled
00:26:07.042
00:26:07.042 real 0m0.854s
00:26:07.042 user 0m0.431s
00:26:07.042 sys 0m0.296s
00:26:07.042 20:25:02 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:07.042 ************************************
00:26:07.042 END TEST env_vtophys
00:26:07.042 ************************************
00:26:07.042 20:25:02 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:26:07.042 20:25:02 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:26:07.042 20:25:02 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:26:07.042 20:25:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:07.042 20:25:02 env -- common/autotest_common.sh@10 -- # set +x
00:26:07.302 ************************************
00:26:07.302 START TEST env_pci
00:26:07.302 ************************************
00:26:07.302 20:25:02 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:26:07.302
00:26:07.302
00:26:07.302 CUnit - A unit testing framework for C - Version 2.1-3
00:26:07.302 http://cunit.sourceforge.net/
00:26:07.302
00:26:07.302
00:26:07.302 Suite: pci
00:26:07.302 Test: pci_hook ...[2024-10-01 20:25:02.310449] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58501 has claimed it
00:26:07.302 passed
00:26:07.302
00:26:07.302 Run Summary: Type Total Ran Passed Failed Inactive
00:26:07.302 suites 1 1 n/a 0 0
00:26:07.302 tests 1 1 1 0 0
00:26:07.302 asserts 25 25 25 0 n/a
00:26:07.302
00:26:07.302 Elapsed time = 0.002 seconds
00:26:07.302 EAL: Cannot find device (10000:00:01.0)
00:26:07.302 EAL: Failed to attach device on primary process
00:26:07.302 ************************************
00:26:07.302 END TEST env_pci
00:26:07.302 ************************************
00:26:07.302
00:26:07.302 real 0m0.016s
00:26:07.302 user 0m0.008s
00:26:07.302 sys 0m0.008s
00:26:07.302 20:25:02 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:07.302 20:25:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:26:07.302 20:25:02 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:26:07.302 20:25:02 env -- env/env.sh@15 -- # uname
00:26:07.302 20:25:02 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:26:07.302 20:25:02 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:26:07.302 20:25:02 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:26:07.302 20:25:02 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:26:07.302 20:25:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:07.302 20:25:02 env -- common/autotest_common.sh@10 -- # set +x
00:26:07.302 ************************************
00:26:07.302 START TEST env_dpdk_post_init
00:26:07.302 ************************************
00:26:07.302 20:25:02 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:26:07.302 EAL: Detected CPU lcores: 10
00:26:07.302 EAL: Detected NUMA nodes: 1
00:26:07.302 EAL: Detected shared linkage of DPDK
00:26:07.302 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:26:07.302 EAL: Selected IOVA mode 'PA'
00:26:07.302 TELEMETRY: No legacy callbacks, legacy socket not created
00:26:07.302 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:26:07.302 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:26:07.302 Starting DPDK initialization...
00:26:07.302 Starting SPDK post initialization...
00:26:07.302 SPDK NVMe probe
00:26:07.302 Attaching to 0000:00:10.0
00:26:07.302 Attaching to 0000:00:11.0
00:26:07.302 Attached to 0000:00:10.0
00:26:07.302 Attached to 0000:00:11.0
00:26:07.302 Cleaning up...
00:26:07.302
00:26:07.302 real 0m0.186s
00:26:07.302 user 0m0.055s
00:26:07.302 sys 0m0.030s
00:26:07.302 20:25:02 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:07.302 20:25:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:26:07.302 ************************************
00:26:07.302 END TEST env_dpdk_post_init
00:26:07.302 ************************************
00:26:07.560 20:25:02 env -- env/env.sh@26 -- # uname
00:26:07.560 20:25:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:26:07.560 20:25:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:26:07.560 20:25:02 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:26:07.560 20:25:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:07.560 20:25:02 env -- common/autotest_common.sh@10 -- # set +x
00:26:07.560 ************************************
00:26:07.560 START TEST env_mem_callbacks
00:26:07.560 ************************************
00:26:07.560 20:25:02 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:26:07.560 EAL: Detected CPU lcores: 10
00:26:07.560 EAL: Detected NUMA nodes: 1
00:26:07.560 EAL: Detected shared linkage of DPDK
00:26:07.560 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:26:07.560 EAL: Selected IOVA mode 'PA'
00:26:07.560 TELEMETRY: No legacy callbacks, legacy socket not created
00:26:07.560
00:26:07.560
00:26:07.560 CUnit - A unit testing framework for C - Version 2.1-3
00:26:07.560 http://cunit.sourceforge.net/
00:26:07.560
00:26:07.560
00:26:07.560 Suite: memory
00:26:07.560 Test: test ...
00:26:07.560 register 0x200000200000 2097152 00:26:07.560 malloc 3145728 00:26:07.560 register 0x200000400000 4194304 00:26:07.560 buf 0x200000500000 len 3145728 PASSED 00:26:07.560 malloc 64 00:26:07.560 buf 0x2000004fff40 len 64 PASSED 00:26:07.560 malloc 4194304 00:26:07.560 register 0x200000800000 6291456 00:26:07.560 buf 0x200000a00000 len 4194304 PASSED 00:26:07.560 free 0x200000500000 3145728 00:26:07.560 free 0x2000004fff40 64 00:26:07.560 unregister 0x200000400000 4194304 PASSED 00:26:07.560 free 0x200000a00000 4194304 00:26:07.560 unregister 0x200000800000 6291456 PASSED 00:26:07.560 malloc 8388608 00:26:07.560 register 0x200000400000 10485760 00:26:07.560 buf 0x200000600000 len 8388608 PASSED 00:26:07.560 free 0x200000600000 8388608 00:26:07.560 unregister 0x200000400000 10485760 PASSED 00:26:07.560 passed 00:26:07.560 00:26:07.560 Run Summary: Type Total Ran Passed Failed Inactive 00:26:07.560 suites 1 1 n/a 0 0 00:26:07.560 tests 1 1 1 0 0 00:26:07.560 asserts 15 15 15 0 n/a 00:26:07.560 00:26:07.560 Elapsed time = 0.006 seconds 00:26:07.560 00:26:07.560 real 0m0.146s 00:26:07.560 user 0m0.021s 00:26:07.560 sys 0m0.022s 00:26:07.560 20:25:02 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:07.560 20:25:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:26:07.560 ************************************ 00:26:07.560 END TEST env_mem_callbacks 00:26:07.560 ************************************ 00:26:07.560 00:26:07.560 real 0m1.856s 00:26:07.560 user 0m0.921s 00:26:07.560 sys 0m0.594s 00:26:07.560 20:25:02 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:07.560 20:25:02 env -- common/autotest_common.sh@10 -- # set +x 00:26:07.560 ************************************ 00:26:07.560 END TEST env 00:26:07.560 ************************************ 00:26:07.818 20:25:02 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:26:07.818 20:25:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:07.818 20:25:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:07.818 20:25:02 -- common/autotest_common.sh@10 -- # set +x 00:26:07.818 ************************************ 00:26:07.818 START TEST rpc 00:26:07.818 ************************************ 00:26:07.818 20:25:02 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:26:07.818 * Looking for test storage... 
00:26:07.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:26:07.818 20:25:02 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:07.818 20:25:02 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:26:07.818 20:25:02 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:07.818 20:25:02 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:07.818 20:25:02 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:07.818 20:25:02 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:07.818 20:25:02 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:07.818 20:25:02 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:26:07.818 20:25:02 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:26:07.818 20:25:02 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:26:07.818 20:25:02 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:26:07.818 20:25:02 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:26:07.818 20:25:02 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:26:07.818 20:25:02 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:26:07.818 20:25:02 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:07.818 20:25:03 rpc -- scripts/common.sh@344 -- # case "$op" in 00:26:07.818 20:25:03 rpc -- scripts/common.sh@345 -- # : 1 00:26:07.818 20:25:03 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:07.818 20:25:03 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:07.818 20:25:03 rpc -- scripts/common.sh@365 -- # decimal 1 00:26:07.818 20:25:03 rpc -- scripts/common.sh@353 -- # local d=1 00:26:07.818 20:25:03 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:07.818 20:25:03 rpc -- scripts/common.sh@355 -- # echo 1 00:26:07.818 20:25:03 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:07.818 20:25:03 rpc -- scripts/common.sh@366 -- # decimal 2 00:26:07.818 20:25:03 rpc -- scripts/common.sh@353 -- # local d=2 00:26:07.818 20:25:03 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:07.818 20:25:03 rpc -- scripts/common.sh@355 -- # echo 2 00:26:07.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:07.818 20:25:03 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:07.818 20:25:03 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:07.818 20:25:03 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:07.818 20:25:03 rpc -- scripts/common.sh@368 -- # return 0 00:26:07.818 20:25:03 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:07.818 20:25:03 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:07.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.818 --rc genhtml_branch_coverage=1 00:26:07.818 --rc genhtml_function_coverage=1 00:26:07.818 --rc genhtml_legend=1 00:26:07.818 --rc geninfo_all_blocks=1 00:26:07.818 --rc geninfo_unexecuted_blocks=1 00:26:07.818 00:26:07.818 ' 00:26:07.818 20:25:03 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:07.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.818 --rc genhtml_branch_coverage=1 00:26:07.818 --rc genhtml_function_coverage=1 00:26:07.818 --rc genhtml_legend=1 00:26:07.818 --rc geninfo_all_blocks=1 00:26:07.818 --rc geninfo_unexecuted_blocks=1 00:26:07.818 00:26:07.818 ' 00:26:07.818 20:25:03 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:07.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.818 --rc genhtml_branch_coverage=1 00:26:07.818 --rc genhtml_function_coverage=1 00:26:07.818 --rc genhtml_legend=1 00:26:07.818 --rc geninfo_all_blocks=1 00:26:07.818 --rc geninfo_unexecuted_blocks=1 00:26:07.818 00:26:07.818 ' 00:26:07.818 20:25:03 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:07.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.818 --rc genhtml_branch_coverage=1 00:26:07.819 --rc genhtml_function_coverage=1 00:26:07.819 --rc genhtml_legend=1 00:26:07.819 --rc geninfo_all_blocks=1 00:26:07.819 --rc geninfo_unexecuted_blocks=1 00:26:07.819 00:26:07.819 ' 00:26:07.819 20:25:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58624 00:26:07.819 20:25:03 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:26:07.819 20:25:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:26:07.819 20:25:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58624 00:26:07.819 20:25:03 rpc -- common/autotest_common.sh@831 -- # '[' -z 58624 ']' 00:26:07.819 20:25:03 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.819 20:25:03 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:07.819 20:25:03 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.819 20:25:03 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:07.819 20:25:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:26:08.076 [2024-10-01 20:25:03.084503] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:26:08.076 [2024-10-01 20:25:03.084808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58624 ] 00:26:08.076 [2024-10-01 20:25:03.216470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.076 [2024-10-01 20:25:03.275373] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:26:08.076 [2024-10-01 20:25:03.275632] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58624' to capture a snapshot of events at runtime. 00:26:08.076 [2024-10-01 20:25:03.275763] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:08.076 [2024-10-01 20:25:03.275820] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:08.076 [2024-10-01 20:25:03.275935] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58624 for offline analysis/debug. 00:26:08.076 [2024-10-01 20:25:03.276025] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.335 20:25:03 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:08.335 20:25:03 rpc -- common/autotest_common.sh@864 -- # return 0 00:26:08.335 20:25:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:26:08.335 20:25:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:26:08.335 20:25:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:26:08.335 20:25:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:26:08.335 20:25:03 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:08.335 20:25:03 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:08.335 20:25:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:26:08.335 ************************************ 00:26:08.335 START TEST rpc_integrity 00:26:08.335 ************************************ 00:26:08.335 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:26:08.335 20:25:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:08.335 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.335 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:08.335 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.335 20:25:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:26:08.335 20:25:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:26:08.593 20:25:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:26:08.593 20:25:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:26:08.593 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.593 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:08.593 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.593 20:25:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:26:08.593 20:25:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:26:08.593 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.593 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:08.593 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.593 20:25:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:26:08.593 { 00:26:08.593 "aliases": [ 00:26:08.593 "24bcdf32-b274-46ce-909f-31357df9aa94" 00:26:08.593 ], 00:26:08.593 "assigned_rate_limits": { 
00:26:08.593 "r_mbytes_per_sec": 0, 00:26:08.593 "rw_ios_per_sec": 0, 00:26:08.593 "rw_mbytes_per_sec": 0, 00:26:08.593 "w_mbytes_per_sec": 0 00:26:08.593 }, 00:26:08.593 "block_size": 512, 00:26:08.593 "claimed": false, 00:26:08.593 "driver_specific": {}, 00:26:08.593 "memory_domains": [ 00:26:08.593 { 00:26:08.593 "dma_device_id": "system", 00:26:08.593 "dma_device_type": 1 00:26:08.593 }, 00:26:08.593 { 00:26:08.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.593 "dma_device_type": 2 00:26:08.593 } 00:26:08.593 ], 00:26:08.593 "name": "Malloc0", 00:26:08.593 "num_blocks": 16384, 00:26:08.593 "product_name": "Malloc disk", 00:26:08.593 "supported_io_types": { 00:26:08.593 "abort": true, 00:26:08.593 "compare": false, 00:26:08.593 "compare_and_write": false, 00:26:08.593 "copy": true, 00:26:08.593 "flush": true, 00:26:08.593 "get_zone_info": false, 00:26:08.593 "nvme_admin": false, 00:26:08.593 "nvme_io": false, 00:26:08.593 "nvme_io_md": false, 00:26:08.593 "nvme_iov_md": false, 00:26:08.593 "read": true, 00:26:08.593 "reset": true, 00:26:08.593 "seek_data": false, 00:26:08.593 "seek_hole": false, 00:26:08.593 "unmap": true, 00:26:08.593 "write": true, 00:26:08.593 "write_zeroes": true, 00:26:08.593 "zcopy": true, 00:26:08.593 "zone_append": false, 00:26:08.593 "zone_management": false 00:26:08.593 }, 00:26:08.593 "uuid": "24bcdf32-b274-46ce-909f-31357df9aa94", 00:26:08.593 "zoned": false 00:26:08.593 } 00:26:08.593 ]' 00:26:08.593 20:25:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:26:08.593 20:25:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:26:08.593 20:25:03 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:26:08.593 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.593 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:08.593 [2024-10-01 20:25:03.680355] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:26:08.593 [2024-10-01 20:25:03.680413] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:08.593 [2024-10-01 20:25:03.680434] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xafb5e0 00:26:08.593 [2024-10-01 20:25:03.680444] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:08.593 [2024-10-01 20:25:03.682076] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:08.593 [2024-10-01 20:25:03.682258] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:26:08.593 Passthru0 00:26:08.593 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.593 20:25:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:26:08.593 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.593 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:08.593 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.593 20:25:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:26:08.594 { 00:26:08.594 "aliases": [ 00:26:08.594 "24bcdf32-b274-46ce-909f-31357df9aa94" 00:26:08.594 ], 00:26:08.594 "assigned_rate_limits": { 00:26:08.594 "r_mbytes_per_sec": 0, 00:26:08.594 "rw_ios_per_sec": 0, 00:26:08.594 "rw_mbytes_per_sec": 0, 00:26:08.594 "w_mbytes_per_sec": 0 00:26:08.594 }, 00:26:08.594 "block_size": 512, 00:26:08.594 "claim_type": "exclusive_write", 00:26:08.594 
"claimed": true, 00:26:08.594 "driver_specific": {}, 00:26:08.594 "memory_domains": [ 00:26:08.594 { 00:26:08.594 "dma_device_id": "system", 00:26:08.594 "dma_device_type": 1 00:26:08.594 }, 00:26:08.594 { 00:26:08.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.594 "dma_device_type": 2 00:26:08.594 } 00:26:08.594 ], 00:26:08.594 "name": "Malloc0", 00:26:08.594 "num_blocks": 16384, 00:26:08.594 "product_name": "Malloc disk", 00:26:08.594 "supported_io_types": { 00:26:08.594 "abort": true, 00:26:08.594 "compare": false, 00:26:08.594 "compare_and_write": false, 00:26:08.594 "copy": true, 00:26:08.594 "flush": true, 00:26:08.594 "get_zone_info": false, 00:26:08.594 "nvme_admin": false, 00:26:08.594 "nvme_io": false, 00:26:08.594 "nvme_io_md": false, 00:26:08.594 "nvme_iov_md": false, 00:26:08.594 "read": true, 00:26:08.594 "reset": true, 00:26:08.594 "seek_data": false, 00:26:08.594 "seek_hole": false, 00:26:08.594 "unmap": true, 00:26:08.594 "write": true, 00:26:08.594 "write_zeroes": true, 00:26:08.594 "zcopy": true, 00:26:08.594 "zone_append": false, 00:26:08.594 "zone_management": false 00:26:08.594 }, 00:26:08.594 "uuid": "24bcdf32-b274-46ce-909f-31357df9aa94", 00:26:08.594 "zoned": false 00:26:08.594 }, 00:26:08.594 { 00:26:08.594 "aliases": [ 00:26:08.594 "3003f0fa-3180-5fa1-9b06-a1fa8484f96f" 00:26:08.594 ], 00:26:08.594 "assigned_rate_limits": { 00:26:08.594 "r_mbytes_per_sec": 0, 00:26:08.594 "rw_ios_per_sec": 0, 00:26:08.594 "rw_mbytes_per_sec": 0, 00:26:08.594 "w_mbytes_per_sec": 0 00:26:08.594 }, 00:26:08.594 "block_size": 512, 00:26:08.594 "claimed": false, 00:26:08.594 "driver_specific": { 00:26:08.594 "passthru": { 00:26:08.594 "base_bdev_name": "Malloc0", 00:26:08.594 "name": "Passthru0" 00:26:08.594 } 00:26:08.594 }, 00:26:08.594 "memory_domains": [ 00:26:08.594 { 00:26:08.594 "dma_device_id": "system", 00:26:08.594 "dma_device_type": 1 00:26:08.594 }, 00:26:08.594 { 00:26:08.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.594 "dma_device_type": 2 00:26:08.594 } 00:26:08.594 ], 00:26:08.594 "name": "Passthru0", 00:26:08.594 "num_blocks": 16384, 00:26:08.594 "product_name": "passthru", 00:26:08.594 "supported_io_types": { 00:26:08.594 "abort": true, 00:26:08.594 "compare": false, 00:26:08.594 "compare_and_write": false, 00:26:08.594 "copy": true, 00:26:08.594 "flush": true, 00:26:08.594 "get_zone_info": false, 00:26:08.594 "nvme_admin": false, 00:26:08.594 "nvme_io": false, 00:26:08.594 "nvme_io_md": false, 00:26:08.594 "nvme_iov_md": false, 00:26:08.594 "read": true, 00:26:08.594 "reset": true, 00:26:08.594 "seek_data": false, 00:26:08.594 "seek_hole": false, 00:26:08.594 "unmap": true, 00:26:08.594 "write": true, 00:26:08.594 "write_zeroes": true, 00:26:08.594 "zcopy": true, 00:26:08.594 "zone_append": false, 00:26:08.594 "zone_management": false 00:26:08.594 }, 00:26:08.594 "uuid": "3003f0fa-3180-5fa1-9b06-a1fa8484f96f", 00:26:08.594 "zoned": false 00:26:08.594 } 00:26:08.594 ]' 00:26:08.594 20:25:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:26:08.594 20:25:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:26:08.594 20:25:03 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:26:08.594 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.594 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:08.594 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.594 20:25:03 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd 
bdev_malloc_delete Malloc0 00:26:08.594 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.594 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:08.594 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.594 20:25:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:26:08.594 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.594 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:08.594 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.594 20:25:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:26:08.594 20:25:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:26:08.594 ************************************ 00:26:08.594 END TEST rpc_integrity 00:26:08.594 ************************************ 00:26:08.594 20:25:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:26:08.594 00:26:08.594 real 0m0.327s 00:26:08.594 user 0m0.214s 00:26:08.594 sys 0m0.042s 00:26:08.594 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:08.594 20:25:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:08.852 20:25:03 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:26:08.852 20:25:03 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:08.852 20:25:03 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:08.852 20:25:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:26:08.852 ************************************ 00:26:08.852 START TEST rpc_plugins 00:26:08.852 ************************************ 00:26:08.852 20:25:03 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:26:08.852 20:25:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:26:08.852 20:25:03 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.852 20:25:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:26:08.852 20:25:03 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.852 20:25:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:26:08.852 20:25:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:26:08.852 20:25:03 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.852 20:25:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:26:08.852 20:25:03 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.852 20:25:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:26:08.852 { 00:26:08.852 "aliases": [ 00:26:08.852 "ac2ea577-e4b4-4b4e-838c-9b445604abd4" 00:26:08.852 ], 00:26:08.852 "assigned_rate_limits": { 00:26:08.852 "r_mbytes_per_sec": 0, 00:26:08.852 "rw_ios_per_sec": 0, 00:26:08.852 "rw_mbytes_per_sec": 0, 00:26:08.852 "w_mbytes_per_sec": 0 00:26:08.852 }, 00:26:08.852 "block_size": 4096, 00:26:08.852 "claimed": false, 00:26:08.852 "driver_specific": {}, 00:26:08.852 "memory_domains": [ 00:26:08.852 { 00:26:08.852 "dma_device_id": "system", 00:26:08.852 "dma_device_type": 1 00:26:08.852 }, 00:26:08.852 { 00:26:08.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.852 "dma_device_type": 2 00:26:08.852 } 00:26:08.853 ], 00:26:08.853 "name": "Malloc1", 00:26:08.853 "num_blocks": 256, 00:26:08.853 "product_name": "Malloc disk", 00:26:08.853 "supported_io_types": { 00:26:08.853 "abort": true, 00:26:08.853 "compare": false, 
00:26:08.853 "compare_and_write": false, 00:26:08.853 "copy": true, 00:26:08.853 "flush": true, 00:26:08.853 "get_zone_info": false, 00:26:08.853 "nvme_admin": false, 00:26:08.853 "nvme_io": false, 00:26:08.853 "nvme_io_md": false, 00:26:08.853 "nvme_iov_md": false, 00:26:08.853 "read": true, 00:26:08.853 "reset": true, 00:26:08.853 "seek_data": false, 00:26:08.853 "seek_hole": false, 00:26:08.853 "unmap": true, 00:26:08.853 "write": true, 00:26:08.853 "write_zeroes": true, 00:26:08.853 "zcopy": true, 00:26:08.853 "zone_append": false, 00:26:08.853 "zone_management": false 00:26:08.853 }, 00:26:08.853 "uuid": "ac2ea577-e4b4-4b4e-838c-9b445604abd4", 00:26:08.853 "zoned": false 00:26:08.853 } 00:26:08.853 ]' 00:26:08.853 20:25:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:26:08.853 20:25:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:26:08.853 20:25:03 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:26:08.853 20:25:03 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.853 20:25:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:26:08.853 20:25:03 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.853 20:25:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:26:08.853 20:25:03 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.853 20:25:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:26:08.853 20:25:03 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.853 20:25:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:26:08.853 20:25:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:26:08.853 ************************************ 00:26:08.853 END TEST rpc_plugins 00:26:08.853 ************************************ 00:26:08.853 20:25:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:26:08.853 00:26:08.853 real 0m0.167s 00:26:08.853 user 0m0.110s 00:26:08.853 sys 0m0.012s 00:26:08.853 20:25:04 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:08.853 20:25:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:26:08.853 20:25:04 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:26:08.853 20:25:04 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:08.853 20:25:04 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:08.853 20:25:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:26:08.853 ************************************ 00:26:08.853 START TEST rpc_trace_cmd_test 00:26:08.853 ************************************ 00:26:08.853 20:25:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:26:08.853 20:25:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:26:08.853 20:25:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:26:08.853 20:25:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.853 20:25:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.111 20:25:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.111 20:25:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:26:09.111 "bdev": { 00:26:09.111 "mask": "0x8", 00:26:09.111 "tpoint_mask": "0xffffffffffffffff" 00:26:09.111 }, 00:26:09.111 "bdev_nvme": { 00:26:09.111 "mask": "0x4000", 00:26:09.111 "tpoint_mask": "0x0" 00:26:09.111 }, 00:26:09.111 "bdev_raid": { 
00:26:09.111 "mask": "0x20000", 00:26:09.111 "tpoint_mask": "0x0" 00:26:09.111 }, 00:26:09.111 "blob": { 00:26:09.111 "mask": "0x10000", 00:26:09.111 "tpoint_mask": "0x0" 00:26:09.111 }, 00:26:09.111 "blobfs": { 00:26:09.111 "mask": "0x80", 00:26:09.111 "tpoint_mask": "0x0" 00:26:09.111 }, 00:26:09.111 "dsa": { 00:26:09.111 "mask": "0x200", 00:26:09.111 "tpoint_mask": "0x0" 00:26:09.111 }, 00:26:09.111 "ftl": { 00:26:09.111 "mask": "0x40", 00:26:09.111 "tpoint_mask": "0x0" 00:26:09.111 }, 00:26:09.111 "iaa": { 00:26:09.111 "mask": "0x1000", 00:26:09.111 "tpoint_mask": "0x0" 00:26:09.111 }, 00:26:09.111 "iscsi_conn": { 00:26:09.111 "mask": "0x2", 00:26:09.111 "tpoint_mask": "0x0" 00:26:09.111 }, 00:26:09.111 "nvme_pcie": { 00:26:09.111 "mask": "0x800", 00:26:09.111 "tpoint_mask": "0x0" 00:26:09.111 }, 00:26:09.111 "nvme_tcp": { 00:26:09.111 "mask": "0x2000", 00:26:09.111 "tpoint_mask": "0x0" 00:26:09.111 }, 00:26:09.111 "nvmf_rdma": { 00:26:09.112 "mask": "0x10", 00:26:09.112 "tpoint_mask": "0x0" 00:26:09.112 }, 00:26:09.112 "nvmf_tcp": { 00:26:09.112 "mask": "0x20", 00:26:09.112 "tpoint_mask": "0x0" 00:26:09.112 }, 00:26:09.112 "scsi": { 00:26:09.112 "mask": "0x4", 00:26:09.112 "tpoint_mask": "0x0" 00:26:09.112 }, 00:26:09.112 "sock": { 00:26:09.112 "mask": "0x8000", 00:26:09.112 "tpoint_mask": "0x0" 00:26:09.112 }, 00:26:09.112 "thread": { 00:26:09.112 "mask": "0x400", 00:26:09.112 "tpoint_mask": "0x0" 00:26:09.112 }, 00:26:09.112 "tpoint_group_mask": "0x8", 00:26:09.112 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58624" 00:26:09.112 }' 00:26:09.112 20:25:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:26:09.112 20:25:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:26:09.112 20:25:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:26:09.112 20:25:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:26:09.112 20:25:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:26:09.112 20:25:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:26:09.112 20:25:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:26:09.112 20:25:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:26:09.112 20:25:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:26:09.112 ************************************ 00:26:09.112 END TEST rpc_trace_cmd_test 00:26:09.112 ************************************ 00:26:09.112 20:25:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:26:09.112 00:26:09.112 real 0m0.246s 00:26:09.112 user 0m0.203s 00:26:09.112 sys 0m0.032s 00:26:09.112 20:25:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:09.112 20:25:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.370 20:25:04 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:26:09.370 20:25:04 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:26:09.370 20:25:04 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:09.370 20:25:04 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:09.370 20:25:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:26:09.370 ************************************ 00:26:09.370 START TEST go_rpc 00:26:09.370 ************************************ 00:26:09.370 20:25:04 rpc.go_rpc -- common/autotest_common.sh@1125 -- # go_rpc 00:26:09.370 20:25:04 rpc.go_rpc -- rpc/rpc.sh@51 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:26:09.371 20:25:04 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:26:09.371 20:25:04 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:26:09.371 20:25:04 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:26:09.371 20:25:04 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:26:09.371 20:25:04 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.371 20:25:04 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:09.371 20:25:04 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.371 20:25:04 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:26:09.371 20:25:04 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:26:09.371 20:25:04 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["58578068-e9cf-4a85-bd8f-6124de1be3d8"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"58578068-e9cf-4a85-bd8f-6124de1be3d8","zoned":false}]' 00:26:09.371 20:25:04 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:26:09.371 20:25:04 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:26:09.371 20:25:04 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:26:09.371 20:25:04 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.371 20:25:04 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:09.371 20:25:04 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.371 20:25:04 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:26:09.371 20:25:04 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:26:09.371 20:25:04 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:26:09.371 ************************************ 00:26:09.371 END TEST go_rpc 00:26:09.371 ************************************ 00:26:09.371 20:25:04 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:26:09.371 00:26:09.371 real 0m0.228s 00:26:09.371 user 0m0.159s 00:26:09.371 sys 0m0.036s 00:26:09.371 20:25:04 rpc.go_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:09.371 20:25:04 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:09.630 20:25:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:26:09.630 20:25:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:26:09.630 20:25:04 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:09.630 20:25:04 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:09.630 20:25:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:26:09.630 ************************************ 00:26:09.630 START TEST rpc_daemon_integrity 00:26:09.630 ************************************ 00:26:09.630 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:26:09.630 20:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:26:09.630 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.630 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:09.630 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.630 20:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:26:09.630 20:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:26:09.630 20:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:26:09.630 20:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:26:09.630 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.630 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:09.630 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.630 20:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:26:09.630 20:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:26:09.630 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.630 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:09.631 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.631 20:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:26:09.631 { 00:26:09.631 "aliases": [ 00:26:09.631 "6b22f491-e78a-4894-a476-0bb21d9e8f98" 00:26:09.631 ], 00:26:09.631 "assigned_rate_limits": { 00:26:09.631 "r_mbytes_per_sec": 0, 00:26:09.631 "rw_ios_per_sec": 0, 00:26:09.631 "rw_mbytes_per_sec": 0, 00:26:09.631 "w_mbytes_per_sec": 0 00:26:09.631 }, 00:26:09.631 "block_size": 512, 00:26:09.631 "claimed": false, 00:26:09.631 "driver_specific": {}, 00:26:09.631 "memory_domains": [ 00:26:09.631 { 00:26:09.631 "dma_device_id": "system", 00:26:09.631 "dma_device_type": 1 00:26:09.631 }, 00:26:09.631 { 00:26:09.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.631 "dma_device_type": 2 00:26:09.631 } 00:26:09.631 ], 00:26:09.631 "name": "Malloc3", 00:26:09.631 "num_blocks": 16384, 00:26:09.631 "product_name": "Malloc disk", 00:26:09.631 "supported_io_types": { 00:26:09.631 "abort": true, 00:26:09.631 "compare": false, 00:26:09.631 "compare_and_write": false, 00:26:09.631 "copy": true, 00:26:09.631 "flush": true, 00:26:09.631 "get_zone_info": false, 00:26:09.631 "nvme_admin": false, 00:26:09.631 "nvme_io": false, 00:26:09.631 "nvme_io_md": false, 00:26:09.631 "nvme_iov_md": false, 00:26:09.631 "read": true, 00:26:09.631 "reset": true, 00:26:09.631 "seek_data": false, 00:26:09.631 "seek_hole": false, 00:26:09.631 "unmap": true, 00:26:09.631 "write": true, 00:26:09.631 "write_zeroes": true, 00:26:09.631 "zcopy": true, 00:26:09.631 "zone_append": false, 00:26:09.631 "zone_management": false 00:26:09.631 }, 00:26:09.631 "uuid": "6b22f491-e78a-4894-a476-0bb21d9e8f98", 00:26:09.631 "zoned": false 00:26:09.631 } 00:26:09.631 ]' 00:26:09.631 20:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:26:09.631 20:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:26:09.631 20:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:26:09.631 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.631 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set 
+x 00:26:09.631 [2024-10-01 20:25:04.816749] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:26:09.631 [2024-10-01 20:25:04.816806] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:09.631 [2024-10-01 20:25:04.816829] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xafcfa0 00:26:09.631 [2024-10-01 20:25:04.816839] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:09.631 [2024-10-01 20:25:04.818538] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:09.631 [2024-10-01 20:25:04.818578] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:26:09.631 Passthru0 00:26:09.631 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.631 20:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:26:09.631 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.631 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:09.631 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.631 20:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:26:09.631 { 00:26:09.631 "aliases": [ 00:26:09.631 "6b22f491-e78a-4894-a476-0bb21d9e8f98" 00:26:09.631 ], 00:26:09.631 "assigned_rate_limits": { 00:26:09.631 "r_mbytes_per_sec": 0, 00:26:09.631 "rw_ios_per_sec": 0, 00:26:09.631 "rw_mbytes_per_sec": 0, 00:26:09.631 "w_mbytes_per_sec": 0 00:26:09.631 }, 00:26:09.631 "block_size": 512, 00:26:09.631 "claim_type": "exclusive_write", 00:26:09.631 "claimed": true, 00:26:09.631 "driver_specific": {}, 00:26:09.631 "memory_domains": [ 00:26:09.631 { 00:26:09.631 "dma_device_id": "system", 00:26:09.631 "dma_device_type": 1 00:26:09.631 }, 00:26:09.631 { 00:26:09.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.631 "dma_device_type": 2 00:26:09.631 } 00:26:09.631 ], 00:26:09.631 "name": "Malloc3", 00:26:09.631 "num_blocks": 16384, 00:26:09.631 "product_name": "Malloc disk", 00:26:09.631 "supported_io_types": { 00:26:09.631 "abort": true, 00:26:09.631 "compare": false, 00:26:09.631 "compare_and_write": false, 00:26:09.631 "copy": true, 00:26:09.631 "flush": true, 00:26:09.631 "get_zone_info": false, 00:26:09.631 "nvme_admin": false, 00:26:09.631 "nvme_io": false, 00:26:09.631 "nvme_io_md": false, 00:26:09.631 "nvme_iov_md": false, 00:26:09.631 "read": true, 00:26:09.631 "reset": true, 00:26:09.631 "seek_data": false, 00:26:09.631 "seek_hole": false, 00:26:09.631 "unmap": true, 00:26:09.631 "write": true, 00:26:09.631 "write_zeroes": true, 00:26:09.631 "zcopy": true, 00:26:09.631 "zone_append": false, 00:26:09.631 "zone_management": false 00:26:09.631 }, 00:26:09.631 "uuid": "6b22f491-e78a-4894-a476-0bb21d9e8f98", 00:26:09.631 "zoned": false 00:26:09.631 }, 00:26:09.631 { 00:26:09.631 "aliases": [ 00:26:09.631 "34e7c50b-f323-53be-a0bc-6fe99ef82503" 00:26:09.631 ], 00:26:09.631 "assigned_rate_limits": { 00:26:09.631 "r_mbytes_per_sec": 0, 00:26:09.631 "rw_ios_per_sec": 0, 00:26:09.631 "rw_mbytes_per_sec": 0, 00:26:09.631 "w_mbytes_per_sec": 0 00:26:09.631 }, 00:26:09.631 "block_size": 512, 00:26:09.631 "claimed": false, 00:26:09.631 "driver_specific": { 00:26:09.631 "passthru": { 00:26:09.631 "base_bdev_name": "Malloc3", 00:26:09.631 "name": "Passthru0" 00:26:09.631 } 00:26:09.631 }, 00:26:09.631 "memory_domains": [ 00:26:09.631 { 00:26:09.631 "dma_device_id": 
"system", 00:26:09.631 "dma_device_type": 1 00:26:09.631 }, 00:26:09.631 { 00:26:09.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.631 "dma_device_type": 2 00:26:09.631 } 00:26:09.631 ], 00:26:09.631 "name": "Passthru0", 00:26:09.631 "num_blocks": 16384, 00:26:09.631 "product_name": "passthru", 00:26:09.631 "supported_io_types": { 00:26:09.631 "abort": true, 00:26:09.631 "compare": false, 00:26:09.631 "compare_and_write": false, 00:26:09.631 "copy": true, 00:26:09.631 "flush": true, 00:26:09.631 "get_zone_info": false, 00:26:09.631 "nvme_admin": false, 00:26:09.631 "nvme_io": false, 00:26:09.631 "nvme_io_md": false, 00:26:09.631 "nvme_iov_md": false, 00:26:09.631 "read": true, 00:26:09.631 "reset": true, 00:26:09.631 "seek_data": false, 00:26:09.631 "seek_hole": false, 00:26:09.631 "unmap": true, 00:26:09.631 "write": true, 00:26:09.631 "write_zeroes": true, 00:26:09.631 "zcopy": true, 00:26:09.631 "zone_append": false, 00:26:09.631 "zone_management": false 00:26:09.631 }, 00:26:09.631 "uuid": "34e7c50b-f323-53be-a0bc-6fe99ef82503", 00:26:09.631 "zoned": false 00:26:09.631 } 00:26:09.631 ]' 00:26:09.631 20:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:26:09.889 20:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:26:09.889 20:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:26:09.889 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.889 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:09.889 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.889 20:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:26:09.889 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.889 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:09.889 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.889 20:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:26:09.889 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.889 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:09.889 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.889 20:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:26:09.889 20:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:26:09.889 ************************************ 00:26:09.889 END TEST rpc_daemon_integrity 00:26:09.889 ************************************ 00:26:09.889 20:25:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:26:09.889 00:26:09.889 real 0m0.321s 00:26:09.889 user 0m0.218s 00:26:09.889 sys 0m0.041s 00:26:09.889 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:09.889 20:25:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:26:09.889 20:25:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:09.889 20:25:05 rpc -- rpc/rpc.sh@84 -- # killprocess 58624 00:26:09.889 20:25:05 rpc -- common/autotest_common.sh@950 -- # '[' -z 58624 ']' 00:26:09.889 20:25:05 rpc -- common/autotest_common.sh@954 -- # kill -0 58624 00:26:09.889 20:25:05 rpc -- common/autotest_common.sh@955 -- # uname 00:26:09.889 20:25:05 rpc -- common/autotest_common.sh@955 -- # 
'[' Linux = Linux ']' 00:26:09.889 20:25:05 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58624 00:26:09.889 killing process with pid 58624 00:26:09.889 20:25:05 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:09.889 20:25:05 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:09.889 20:25:05 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58624' 00:26:09.889 20:25:05 rpc -- common/autotest_common.sh@969 -- # kill 58624 00:26:09.889 20:25:05 rpc -- common/autotest_common.sh@974 -- # wait 58624 00:26:10.146 00:26:10.146 real 0m2.543s 00:26:10.146 user 0m3.392s 00:26:10.146 sys 0m0.659s 00:26:10.146 20:25:05 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:10.146 20:25:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:26:10.146 ************************************ 00:26:10.146 END TEST rpc 00:26:10.146 ************************************ 00:26:10.404 20:25:05 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:26:10.404 20:25:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:10.404 20:25:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:10.404 20:25:05 -- common/autotest_common.sh@10 -- # set +x 00:26:10.404 ************************************ 00:26:10.404 START TEST skip_rpc 00:26:10.404 ************************************ 00:26:10.404 20:25:05 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:26:10.404 * Looking for test storage... 00:26:10.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:26:10.404 20:25:05 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:10.404 20:25:05 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:26:10.404 20:25:05 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:10.404 20:25:05 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:10.404 20:25:05 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:10.404 20:25:05 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:10.404 20:25:05 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:10.404 20:25:05 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:26:10.404 20:25:05 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:26:10.404 20:25:05 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:26:10.404 20:25:05 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:26:10.404 20:25:05 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:26:10.404 20:25:05 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:26:10.404 20:25:05 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:26:10.404 20:25:05 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:10.404 20:25:05 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:26:10.404 20:25:05 skip_rpc -- scripts/common.sh@345 -- # : 1 00:26:10.405 20:25:05 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:10.405 20:25:05 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:10.405 20:25:05 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:26:10.405 20:25:05 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:26:10.405 20:25:05 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:10.405 20:25:05 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:26:10.405 20:25:05 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:10.405 20:25:05 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:26:10.405 20:25:05 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:26:10.405 20:25:05 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:10.405 20:25:05 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:26:10.405 20:25:05 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:10.405 20:25:05 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:10.405 20:25:05 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:10.405 20:25:05 skip_rpc -- scripts/common.sh@368 -- # return 0 00:26:10.405 20:25:05 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:10.405 20:25:05 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:10.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.405 --rc genhtml_branch_coverage=1 00:26:10.405 --rc genhtml_function_coverage=1 00:26:10.405 --rc genhtml_legend=1 00:26:10.405 --rc geninfo_all_blocks=1 00:26:10.405 --rc geninfo_unexecuted_blocks=1 00:26:10.405 00:26:10.405 ' 00:26:10.405 20:25:05 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:10.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.405 --rc genhtml_branch_coverage=1 00:26:10.405 --rc genhtml_function_coverage=1 00:26:10.405 --rc genhtml_legend=1 00:26:10.405 --rc geninfo_all_blocks=1 00:26:10.405 --rc geninfo_unexecuted_blocks=1 00:26:10.405 00:26:10.405 ' 00:26:10.405 20:25:05 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:10.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.405 --rc genhtml_branch_coverage=1 00:26:10.405 --rc genhtml_function_coverage=1 00:26:10.405 --rc genhtml_legend=1 00:26:10.405 --rc geninfo_all_blocks=1 00:26:10.405 --rc geninfo_unexecuted_blocks=1 00:26:10.405 00:26:10.405 ' 00:26:10.405 20:25:05 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:10.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.405 --rc genhtml_branch_coverage=1 00:26:10.405 --rc genhtml_function_coverage=1 00:26:10.405 --rc genhtml_legend=1 00:26:10.405 --rc geninfo_all_blocks=1 00:26:10.405 --rc geninfo_unexecuted_blocks=1 00:26:10.405 00:26:10.405 ' 00:26:10.405 20:25:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:26:10.405 20:25:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:26:10.405 20:25:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:26:10.405 20:25:05 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:10.405 20:25:05 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:10.405 20:25:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:10.405 ************************************ 00:26:10.405 START TEST skip_rpc 00:26:10.405 ************************************ 00:26:10.405 20:25:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:26:10.405 20:25:05 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=58880 00:26:10.405 20:25:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:26:10.405 20:25:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:26:10.405 20:25:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:26:10.663 [2024-10-01 20:25:05.661234] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:26:10.663 [2024-10-01 20:25:05.661337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58880 ] 00:26:10.663 [2024-10-01 20:25:05.799524] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.663 [2024-10-01 20:25:05.870588] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:15.986 2024/10/01 20:25:10 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58880 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 58880 ']' 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 58880 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58880 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:15.986 killing process with pid 58880 00:26:15.986 20:25:10 skip_rpc.skip_rpc 
-- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58880' 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 58880 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 58880 00:26:15.986 00:26:15.986 real 0m5.370s 00:26:15.986 user 0m5.011s 00:26:15.986 sys 0m0.241s 00:26:15.986 ************************************ 00:26:15.986 END TEST skip_rpc 00:26:15.986 ************************************ 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:15.986 20:25:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:15.986 20:25:10 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:26:15.986 20:25:10 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:15.986 20:25:10 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:15.986 20:25:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:15.986 ************************************ 00:26:15.986 START TEST skip_rpc_with_json 00:26:15.986 ************************************ 00:26:15.986 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:26:15.986 20:25:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:26:15.986 20:25:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58967 00:26:15.986 20:25:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:26:15.986 20:25:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58967 00:26:15.986 20:25:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:15.986 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 58967 ']' 00:26:15.986 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.986 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:15.987 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.987 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:15.987 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:26:15.987 [2024-10-01 20:25:11.078990] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
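Once the pid-58967 target is up, skip_rpc_with_json proves save_config round-trips a live configuration: it first shows that no TCP transport exists, creates one, then dumps the JSON reproduced below. The same three steps by hand, as a sketch with the paths from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_get_transports --trtype tcp   # fails with Code=-19: no transport yet
    $rpc nvmf_create_transport -t tcp       # logs "*** TCP Transport Init ***"
    $rpc save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json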
00:26:15.987 [2024-10-01 20:25:11.079103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58967 ] 00:26:15.987 [2024-10-01 20:25:11.220163] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.246 [2024-10-01 20:25:11.292038] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.506 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:16.506 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:26:16.506 20:25:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:26:16.506 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.506 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:26:16.506 [2024-10-01 20:25:11.534248] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:26:16.506 2024/10/01 20:25:11 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:26:16.506 request: 00:26:16.506 { 00:26:16.506 "method": "nvmf_get_transports", 00:26:16.506 "params": { 00:26:16.506 "trtype": "tcp" 00:26:16.506 } 00:26:16.506 } 00:26:16.506 Got JSON-RPC error response 00:26:16.506 GoRPCClient: error on JSON-RPC call 00:26:16.506 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:16.506 20:25:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:26:16.506 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.506 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:26:16.506 [2024-10-01 20:25:11.546354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:16.506 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.506 20:25:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:26:16.506 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.506 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:26:16.506 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.506 20:25:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:26:16.506 { 00:26:16.506 "subsystems": [ 00:26:16.506 { 00:26:16.506 "subsystem": "fsdev", 00:26:16.506 "config": [ 00:26:16.506 { 00:26:16.506 "method": "fsdev_set_opts", 00:26:16.506 "params": { 00:26:16.506 "fsdev_io_cache_size": 256, 00:26:16.506 "fsdev_io_pool_size": 65535 00:26:16.506 } 00:26:16.506 } 00:26:16.506 ] 00:26:16.506 }, 00:26:16.506 { 00:26:16.506 "subsystem": "keyring", 00:26:16.506 "config": [] 00:26:16.506 }, 00:26:16.506 { 00:26:16.506 "subsystem": "iobuf", 00:26:16.506 "config": [ 00:26:16.506 { 00:26:16.506 "method": "iobuf_set_options", 00:26:16.506 "params": { 00:26:16.506 "large_bufsize": 135168, 00:26:16.506 "large_pool_count": 1024, 00:26:16.506 "small_bufsize": 8192, 00:26:16.506 "small_pool_count": 8192 00:26:16.506 } 00:26:16.506 } 00:26:16.506 ] 
00:26:16.506 }, 00:26:16.506 { 00:26:16.506 "subsystem": "sock", 00:26:16.506 "config": [ 00:26:16.506 { 00:26:16.506 "method": "sock_set_default_impl", 00:26:16.506 "params": { 00:26:16.506 "impl_name": "posix" 00:26:16.506 } 00:26:16.506 }, 00:26:16.506 { 00:26:16.506 "method": "sock_impl_set_options", 00:26:16.506 "params": { 00:26:16.506 "enable_ktls": false, 00:26:16.506 "enable_placement_id": 0, 00:26:16.506 "enable_quickack": false, 00:26:16.506 "enable_recv_pipe": true, 00:26:16.506 "enable_zerocopy_send_client": false, 00:26:16.506 "enable_zerocopy_send_server": true, 00:26:16.506 "impl_name": "ssl", 00:26:16.506 "recv_buf_size": 4096, 00:26:16.506 "send_buf_size": 4096, 00:26:16.506 "tls_version": 0, 00:26:16.506 "zerocopy_threshold": 0 00:26:16.506 } 00:26:16.506 }, 00:26:16.506 { 00:26:16.506 "method": "sock_impl_set_options", 00:26:16.506 "params": { 00:26:16.506 "enable_ktls": false, 00:26:16.506 "enable_placement_id": 0, 00:26:16.506 "enable_quickack": false, 00:26:16.506 "enable_recv_pipe": true, 00:26:16.506 "enable_zerocopy_send_client": false, 00:26:16.506 "enable_zerocopy_send_server": true, 00:26:16.506 "impl_name": "posix", 00:26:16.506 "recv_buf_size": 2097152, 00:26:16.506 "send_buf_size": 2097152, 00:26:16.506 "tls_version": 0, 00:26:16.506 "zerocopy_threshold": 0 00:26:16.506 } 00:26:16.506 } 00:26:16.506 ] 00:26:16.506 }, 00:26:16.506 { 00:26:16.506 "subsystem": "vmd", 00:26:16.506 "config": [] 00:26:16.506 }, 00:26:16.506 { 00:26:16.506 "subsystem": "accel", 00:26:16.506 "config": [ 00:26:16.506 { 00:26:16.506 "method": "accel_set_options", 00:26:16.506 "params": { 00:26:16.506 "buf_count": 2048, 00:26:16.506 "large_cache_size": 16, 00:26:16.506 "sequence_count": 2048, 00:26:16.506 "small_cache_size": 128, 00:26:16.506 "task_count": 2048 00:26:16.506 } 00:26:16.506 } 00:26:16.506 ] 00:26:16.506 }, 00:26:16.506 { 00:26:16.506 "subsystem": "bdev", 00:26:16.506 "config": [ 00:26:16.506 { 00:26:16.506 "method": "bdev_set_options", 00:26:16.506 "params": { 00:26:16.506 "bdev_auto_examine": true, 00:26:16.506 "bdev_io_cache_size": 256, 00:26:16.506 "bdev_io_pool_size": 65535, 00:26:16.506 "bdev_io_stack_size": 4096, 00:26:16.506 "iobuf_large_cache_size": 16, 00:26:16.506 "iobuf_small_cache_size": 128 00:26:16.506 } 00:26:16.506 }, 00:26:16.506 { 00:26:16.506 "method": "bdev_raid_set_options", 00:26:16.506 "params": { 00:26:16.506 "process_max_bandwidth_mb_sec": 0, 00:26:16.506 "process_window_size_kb": 1024 00:26:16.506 } 00:26:16.506 }, 00:26:16.506 { 00:26:16.506 "method": "bdev_iscsi_set_options", 00:26:16.506 "params": { 00:26:16.506 "timeout_sec": 30 00:26:16.506 } 00:26:16.506 }, 00:26:16.506 { 00:26:16.506 "method": "bdev_nvme_set_options", 00:26:16.506 "params": { 00:26:16.506 "action_on_timeout": "none", 00:26:16.506 "allow_accel_sequence": false, 00:26:16.506 "arbitration_burst": 0, 00:26:16.506 "bdev_retry_count": 3, 00:26:16.506 "ctrlr_loss_timeout_sec": 0, 00:26:16.506 "delay_cmd_submit": true, 00:26:16.506 "dhchap_dhgroups": [ 00:26:16.506 "null", 00:26:16.506 "ffdhe2048", 00:26:16.506 "ffdhe3072", 00:26:16.506 "ffdhe4096", 00:26:16.506 "ffdhe6144", 00:26:16.506 "ffdhe8192" 00:26:16.506 ], 00:26:16.506 "dhchap_digests": [ 00:26:16.506 "sha256", 00:26:16.506 "sha384", 00:26:16.506 "sha512" 00:26:16.506 ], 00:26:16.506 "disable_auto_failback": false, 00:26:16.506 "fast_io_fail_timeout_sec": 0, 00:26:16.506 "generate_uuids": false, 00:26:16.506 "high_priority_weight": 0, 00:26:16.506 "io_path_stat": false, 00:26:16.506 "io_queue_requests": 0, 00:26:16.506 
"keep_alive_timeout_ms": 10000, 00:26:16.506 "low_priority_weight": 0, 00:26:16.506 "medium_priority_weight": 0, 00:26:16.506 "nvme_adminq_poll_period_us": 10000, 00:26:16.506 "nvme_error_stat": false, 00:26:16.506 "nvme_ioq_poll_period_us": 0, 00:26:16.506 "rdma_cm_event_timeout_ms": 0, 00:26:16.506 "rdma_max_cq_size": 0, 00:26:16.506 "rdma_srq_size": 0, 00:26:16.506 "reconnect_delay_sec": 0, 00:26:16.506 "timeout_admin_us": 0, 00:26:16.506 "timeout_us": 0, 00:26:16.506 "transport_ack_timeout": 0, 00:26:16.506 "transport_retry_count": 4, 00:26:16.506 "transport_tos": 0 00:26:16.506 } 00:26:16.506 }, 00:26:16.506 { 00:26:16.506 "method": "bdev_nvme_set_hotplug", 00:26:16.506 "params": { 00:26:16.506 "enable": false, 00:26:16.506 "period_us": 100000 00:26:16.506 } 00:26:16.506 }, 00:26:16.506 { 00:26:16.506 "method": "bdev_wait_for_examine" 00:26:16.506 } 00:26:16.506 ] 00:26:16.506 }, 00:26:16.506 { 00:26:16.506 "subsystem": "scsi", 00:26:16.506 "config": null 00:26:16.506 }, 00:26:16.506 { 00:26:16.506 "subsystem": "scheduler", 00:26:16.506 "config": [ 00:26:16.506 { 00:26:16.506 "method": "framework_set_scheduler", 00:26:16.506 "params": { 00:26:16.506 "name": "static" 00:26:16.506 } 00:26:16.506 } 00:26:16.506 ] 00:26:16.506 }, 00:26:16.506 { 00:26:16.506 "subsystem": "vhost_scsi", 00:26:16.506 "config": [] 00:26:16.506 }, 00:26:16.506 { 00:26:16.506 "subsystem": "vhost_blk", 00:26:16.506 "config": [] 00:26:16.506 }, 00:26:16.506 { 00:26:16.506 "subsystem": "ublk", 00:26:16.506 "config": [] 00:26:16.506 }, 00:26:16.506 { 00:26:16.506 "subsystem": "nbd", 00:26:16.506 "config": [] 00:26:16.506 }, 00:26:16.506 { 00:26:16.506 "subsystem": "nvmf", 00:26:16.506 "config": [ 00:26:16.506 { 00:26:16.506 "method": "nvmf_set_config", 00:26:16.506 "params": { 00:26:16.506 "admin_cmd_passthru": { 00:26:16.506 "identify_ctrlr": false 00:26:16.506 }, 00:26:16.506 "dhchap_dhgroups": [ 00:26:16.506 "null", 00:26:16.506 "ffdhe2048", 00:26:16.506 "ffdhe3072", 00:26:16.506 "ffdhe4096", 00:26:16.506 "ffdhe6144", 00:26:16.507 "ffdhe8192" 00:26:16.507 ], 00:26:16.507 "dhchap_digests": [ 00:26:16.507 "sha256", 00:26:16.507 "sha384", 00:26:16.507 "sha512" 00:26:16.507 ], 00:26:16.507 "discovery_filter": "match_any" 00:26:16.507 } 00:26:16.507 }, 00:26:16.507 { 00:26:16.507 "method": "nvmf_set_max_subsystems", 00:26:16.507 "params": { 00:26:16.507 "max_subsystems": 1024 00:26:16.507 } 00:26:16.507 }, 00:26:16.507 { 00:26:16.507 "method": "nvmf_set_crdt", 00:26:16.507 "params": { 00:26:16.507 "crdt1": 0, 00:26:16.507 "crdt2": 0, 00:26:16.507 "crdt3": 0 00:26:16.507 } 00:26:16.507 }, 00:26:16.507 { 00:26:16.507 "method": "nvmf_create_transport", 00:26:16.507 "params": { 00:26:16.507 "abort_timeout_sec": 1, 00:26:16.507 "ack_timeout": 0, 00:26:16.507 "buf_cache_size": 4294967295, 00:26:16.507 "c2h_success": true, 00:26:16.507 "data_wr_pool_size": 0, 00:26:16.507 "dif_insert_or_strip": false, 00:26:16.507 "in_capsule_data_size": 4096, 00:26:16.507 "io_unit_size": 131072, 00:26:16.507 "max_aq_depth": 128, 00:26:16.507 "max_io_qpairs_per_ctrlr": 127, 00:26:16.507 "max_io_size": 131072, 00:26:16.507 "max_queue_depth": 128, 00:26:16.507 "num_shared_buffers": 511, 00:26:16.507 "sock_priority": 0, 00:26:16.507 "trtype": "TCP", 00:26:16.507 "zcopy": false 00:26:16.507 } 00:26:16.507 } 00:26:16.507 ] 00:26:16.507 }, 00:26:16.507 { 00:26:16.507 "subsystem": "iscsi", 00:26:16.507 "config": [ 00:26:16.507 { 00:26:16.507 "method": "iscsi_set_options", 00:26:16.507 "params": { 00:26:16.507 "allow_duplicated_isid": false, 
00:26:16.507 "chap_group": 0, 00:26:16.507 "data_out_pool_size": 2048, 00:26:16.507 "default_time2retain": 20, 00:26:16.507 "default_time2wait": 2, 00:26:16.507 "disable_chap": false, 00:26:16.507 "error_recovery_level": 0, 00:26:16.507 "first_burst_length": 8192, 00:26:16.507 "immediate_data": true, 00:26:16.507 "immediate_data_pool_size": 16384, 00:26:16.507 "max_connections_per_session": 2, 00:26:16.507 "max_large_datain_per_connection": 64, 00:26:16.507 "max_queue_depth": 64, 00:26:16.507 "max_r2t_per_connection": 4, 00:26:16.507 "max_sessions": 128, 00:26:16.507 "mutual_chap": false, 00:26:16.507 "node_base": "iqn.2016-06.io.spdk", 00:26:16.507 "nop_in_interval": 30, 00:26:16.507 "nop_timeout": 60, 00:26:16.507 "pdu_pool_size": 36864, 00:26:16.507 "require_chap": false 00:26:16.507 } 00:26:16.507 } 00:26:16.507 ] 00:26:16.507 } 00:26:16.507 ] 00:26:16.507 } 00:26:16.507 20:25:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:26:16.507 20:25:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58967 00:26:16.507 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 58967 ']' 00:26:16.507 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 58967 00:26:16.507 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:26:16.507 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:16.507 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58967 00:26:16.766 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:16.766 killing process with pid 58967 00:26:16.766 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:16.766 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58967' 00:26:16.766 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 58967 00:26:16.766 20:25:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 58967 00:26:17.025 20:25:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58993 00:26:17.025 20:25:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:26:17.025 20:25:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58993 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 58993 ']' 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 58993 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58993 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:22.358 killing process with pid 58993 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 58993' 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 58993 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 58993 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:26:22.358 00:26:22.358 real 0m6.431s 00:26:22.358 user 0m6.061s 00:26:22.358 sys 0m0.540s 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:26:22.358 ************************************ 00:26:22.358 END TEST skip_rpc_with_json 00:26:22.358 ************************************ 00:26:22.358 20:25:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:26:22.358 20:25:17 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:22.358 20:25:17 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:22.358 20:25:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:22.358 ************************************ 00:26:22.358 START TEST skip_rpc_with_delay 00:26:22.358 ************************************ 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:26:22.358 [2024-10-01 20:25:17.565625] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
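The spdk_app_start error above is the test's pass condition: --no-rpc-server and --wait-for-rpc are contradictory, so test_skip_rpc_with_delay expects a non-zero exit, which the NOT() wrapper asserts. A minimal standalone sketch of that assertion, assuming a built SPDK tree (the path default below is the one used in this run):

```bash
#!/usr/bin/env bash
# Negative test distilled from the log: spdk_tgt must refuse --wait-for-rpc
# when --no-rpc-server disables the RPC server entirely.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # path taken from this run

if "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "FAIL: contradictory flags were accepted" >&2
    exit 1
fi
echo "PASS: spdk_tgt exited non-zero, as the NOT() wrapper expects"
```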
00:26:22.358 [2024-10-01 20:25:17.565780] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:22.358 00:26:22.358 real 0m0.097s 00:26:22.358 user 0m0.075s 00:26:22.358 sys 0m0.022s 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:22.358 ************************************ 00:26:22.358 END TEST skip_rpc_with_delay 00:26:22.358 20:25:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:26:22.358 ************************************ 00:26:22.616 20:25:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:26:22.616 20:25:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:26:22.616 20:25:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:26:22.616 20:25:17 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:22.616 20:25:17 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:22.616 20:25:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:22.616 ************************************ 00:26:22.616 START TEST exit_on_failed_rpc_init 00:26:22.616 ************************************ 00:26:22.616 20:25:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:26:22.617 20:25:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59102 00:26:22.617 20:25:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59102 00:26:22.617 20:25:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 59102 ']' 00:26:22.617 20:25:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:22.617 20:25:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.617 20:25:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:22.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.617 20:25:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.617 20:25:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:22.617 20:25:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:26:22.617 [2024-10-01 20:25:17.699668] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:26:22.617 [2024-10-01 20:25:17.699755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59102 ] 00:26:22.617 [2024-10-01 20:25:17.831042] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.874 [2024-10-01 20:25:17.900282] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.133 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:23.133 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:26:23.133 20:25:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:26:23.133 20:25:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:26:23.133 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:26:23.133 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:26:23.133 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:23.133 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:23.133 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:23.133 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:23.133 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:23.133 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:23.133 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:23.133 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:26:23.133 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:26:23.133 [2024-10-01 20:25:18.205349] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:26:23.133 [2024-10-01 20:25:18.205453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59119 ] 00:26:23.133 [2024-10-01 20:25:18.339037] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.393 [2024-10-01 20:25:18.437186] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.393 [2024-10-01 20:25:18.437290] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
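The "socket in use" error is likewise deliberate: exit_on_failed_rpc_init launches a second target against the same default RPC socket that the first (pid 59102) still owns, and the RPC listener refuses it. A rough sketch of the collision, with a crude sleep standing in for the harness's waitforlisten polling:

```bash
# Two targets, one default RPC socket: the second target's RPC listener finds
# /var/tmp/spdk.sock in use and spdk_app_start fails. Assumes a built tree.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &
first=$!
sleep 2   # crude wait; the real harness polls the socket via waitforlisten

"$SPDK_DIR/build/bin/spdk_tgt" -m 0x2 \
    && echo "unexpected: second target started" >&2 \
    || echo "expected: RPC socket in use, second target exited non-zero"

kill -SIGINT "$first" && wait "$first"
```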
00:26:23.393 [2024-10-01 20:25:18.437309] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:23.393 [2024-10-01 20:25:18.437319] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:23.393 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:26:23.393 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:23.393 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:26:23.393 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:26:23.393 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:26:23.393 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:23.393 20:25:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:26:23.393 20:25:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59102 00:26:23.393 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 59102 ']' 00:26:23.393 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 59102 00:26:23.393 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:26:23.393 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:23.393 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59102 00:26:23.393 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:23.393 killing process with pid 59102 00:26:23.393 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:23.393 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59102' 00:26:23.393 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 59102 00:26:23.393 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 59102 00:26:23.653 00:26:23.653 real 0m1.242s 00:26:23.653 user 0m1.464s 00:26:23.653 sys 0m0.318s 00:26:23.653 ************************************ 00:26:23.653 END TEST exit_on_failed_rpc_init 00:26:23.653 ************************************ 00:26:23.653 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:23.653 20:25:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.911 20:25:18 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:26:23.911 00:26:23.911 real 0m13.503s 00:26:23.911 user 0m12.776s 00:26:23.911 sys 0m1.313s 00:26:23.911 20:25:18 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:23.911 20:25:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:23.911 ************************************ 00:26:23.911 END TEST skip_rpc 00:26:23.911 ************************************ 00:26:23.911 20:25:18 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:26:23.911 20:25:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:23.911 20:25:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:23.911 20:25:18 -- common/autotest_common.sh@10 -- # set +x 00:26:23.911 
************************************ 00:26:23.911 START TEST rpc_client 00:26:23.911 ************************************ 00:26:23.911 20:25:18 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:26:23.911 * Looking for test storage... 00:26:23.911 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:26:23.911 20:25:19 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:23.911 20:25:19 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:26:23.911 20:25:19 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:23.911 20:25:19 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:23.911 20:25:19 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:23.911 20:25:19 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:23.911 20:25:19 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:23.911 20:25:19 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:26:23.911 20:25:19 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:26:23.911 20:25:19 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:26:23.911 20:25:19 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:26:23.911 20:25:19 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:26:23.911 20:25:19 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:26:23.911 20:25:19 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:26:23.911 20:25:19 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:23.911 20:25:19 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:26:23.911 20:25:19 rpc_client -- scripts/common.sh@345 -- # : 1 00:26:23.911 20:25:19 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:23.911 20:25:19 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:23.911 20:25:19 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:26:23.911 20:25:19 rpc_client -- scripts/common.sh@353 -- # local d=1 00:26:23.911 20:25:19 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:23.911 20:25:19 rpc_client -- scripts/common.sh@355 -- # echo 1 00:26:23.911 20:25:19 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:26:23.911 20:25:19 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:26:24.174 20:25:19 rpc_client -- scripts/common.sh@353 -- # local d=2 00:26:24.174 20:25:19 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:24.174 20:25:19 rpc_client -- scripts/common.sh@355 -- # echo 2 00:26:24.174 20:25:19 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:26:24.174 20:25:19 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:24.174 20:25:19 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:24.174 20:25:19 rpc_client -- scripts/common.sh@368 -- # return 0 00:26:24.174 20:25:19 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:24.174 20:25:19 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:24.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.174 --rc genhtml_branch_coverage=1 00:26:24.174 --rc genhtml_function_coverage=1 00:26:24.174 --rc genhtml_legend=1 00:26:24.174 --rc geninfo_all_blocks=1 00:26:24.174 --rc geninfo_unexecuted_blocks=1 00:26:24.174 00:26:24.174 ' 00:26:24.174 20:25:19 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:24.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.174 --rc genhtml_branch_coverage=1 00:26:24.174 --rc genhtml_function_coverage=1 00:26:24.174 --rc genhtml_legend=1 00:26:24.174 --rc geninfo_all_blocks=1 00:26:24.174 --rc geninfo_unexecuted_blocks=1 00:26:24.174 00:26:24.174 ' 00:26:24.174 20:25:19 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:24.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.174 --rc genhtml_branch_coverage=1 00:26:24.174 --rc genhtml_function_coverage=1 00:26:24.174 --rc genhtml_legend=1 00:26:24.174 --rc geninfo_all_blocks=1 00:26:24.174 --rc geninfo_unexecuted_blocks=1 00:26:24.174 00:26:24.174 ' 00:26:24.174 20:25:19 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:24.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.174 --rc genhtml_branch_coverage=1 00:26:24.174 --rc genhtml_function_coverage=1 00:26:24.174 --rc genhtml_legend=1 00:26:24.174 --rc geninfo_all_blocks=1 00:26:24.174 --rc geninfo_unexecuted_blocks=1 00:26:24.174 00:26:24.174 ' 00:26:24.174 20:25:19 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:26:24.174 OK 00:26:24.174 20:25:19 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:26:24.174 00:26:24.174 real 0m0.227s 00:26:24.174 user 0m0.151s 00:26:24.174 sys 0m0.089s 00:26:24.174 20:25:19 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:24.174 ************************************ 00:26:24.174 END TEST rpc_client 00:26:24.174 ************************************ 00:26:24.174 20:25:19 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:26:24.175 20:25:19 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:26:24.175 20:25:19 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:24.175 20:25:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:24.175 20:25:19 -- common/autotest_common.sh@10 -- # set +x 00:26:24.175 ************************************ 00:26:24.175 START TEST json_config 00:26:24.175 ************************************ 00:26:24.175 20:25:19 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:26:24.175 20:25:19 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:24.175 20:25:19 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:26:24.175 20:25:19 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:24.175 20:25:19 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:24.175 20:25:19 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:24.175 20:25:19 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:24.175 20:25:19 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:24.175 20:25:19 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:26:24.175 20:25:19 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:26:24.175 20:25:19 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:26:24.175 20:25:19 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:26:24.175 20:25:19 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:26:24.175 20:25:19 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:26:24.175 20:25:19 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:26:24.175 20:25:19 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:24.175 20:25:19 json_config -- scripts/common.sh@344 -- # case "$op" in 00:26:24.175 20:25:19 json_config -- scripts/common.sh@345 -- # : 1 00:26:24.175 20:25:19 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:24.175 20:25:19 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:24.175 20:25:19 json_config -- scripts/common.sh@365 -- # decimal 1 00:26:24.175 20:25:19 json_config -- scripts/common.sh@353 -- # local d=1 00:26:24.175 20:25:19 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:24.175 20:25:19 json_config -- scripts/common.sh@355 -- # echo 1 00:26:24.175 20:25:19 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:26:24.175 20:25:19 json_config -- scripts/common.sh@366 -- # decimal 2 00:26:24.175 20:25:19 json_config -- scripts/common.sh@353 -- # local d=2 00:26:24.175 20:25:19 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:24.175 20:25:19 json_config -- scripts/common.sh@355 -- # echo 2 00:26:24.175 20:25:19 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:26:24.175 20:25:19 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:24.175 20:25:19 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:24.175 20:25:19 json_config -- scripts/common.sh@368 -- # return 0 00:26:24.175 20:25:19 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:24.175 20:25:19 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:24.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.175 --rc genhtml_branch_coverage=1 00:26:24.175 --rc genhtml_function_coverage=1 00:26:24.175 --rc genhtml_legend=1 00:26:24.175 --rc geninfo_all_blocks=1 00:26:24.175 --rc geninfo_unexecuted_blocks=1 00:26:24.175 00:26:24.175 ' 00:26:24.175 20:25:19 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:24.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.175 --rc genhtml_branch_coverage=1 00:26:24.175 --rc genhtml_function_coverage=1 00:26:24.175 --rc genhtml_legend=1 00:26:24.175 --rc geninfo_all_blocks=1 00:26:24.175 --rc geninfo_unexecuted_blocks=1 00:26:24.175 00:26:24.175 ' 00:26:24.175 20:25:19 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:24.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.175 --rc genhtml_branch_coverage=1 00:26:24.175 --rc genhtml_function_coverage=1 00:26:24.175 --rc genhtml_legend=1 00:26:24.175 --rc geninfo_all_blocks=1 00:26:24.175 --rc geninfo_unexecuted_blocks=1 00:26:24.175 00:26:24.175 ' 00:26:24.175 20:25:19 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:24.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.175 --rc genhtml_branch_coverage=1 00:26:24.175 --rc genhtml_function_coverage=1 00:26:24.175 --rc genhtml_legend=1 00:26:24.175 --rc geninfo_all_blocks=1 00:26:24.175 --rc geninfo_unexecuted_blocks=1 00:26:24.175 00:26:24.175 ' 00:26:24.175 20:25:19 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@7 -- # uname -s 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:24.175 20:25:19 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:24.175 20:25:19 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:26:24.175 20:25:19 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:24.175 20:25:19 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:24.175 20:25:19 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:24.175 20:25:19 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.175 20:25:19 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.175 20:25:19 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.175 20:25:19 json_config -- paths/export.sh@5 -- # export PATH 00:26:24.175 20:25:19 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@51 -- # : 0 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:24.175 20:25:19 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:24.175 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:24.175 20:25:19 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:24.175 20:25:19 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:26:24.175 20:25:19 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:26:24.175 20:25:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:26:24.175 20:25:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:26:24.175 20:25:19 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:26:24.175 20:25:19 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:26:24.175 20:25:19 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:26:24.175 20:25:19 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:26:24.175 20:25:19 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:26:24.175 20:25:19 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1536' ['initiator']='-m 0x2 -g -u -s 1024') 00:26:24.175 20:25:19 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:26:24.175 20:25:19 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:26:24.175 20:25:19 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:26:24.175 20:25:19 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:26:24.175 20:25:19 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:26:24.175 INFO: JSON configuration test init 00:26:24.175 20:25:19 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:26:24.175 20:25:19 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:26:24.175 20:25:19 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:26:24.175 20:25:19 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:24.175 20:25:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:26:24.434 20:25:19 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:26:24.434 20:25:19 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:24.434 20:25:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:26:24.434 20:25:19 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:26:24.434 20:25:19 json_config -- json_config/common.sh@9 -- # local app=target 00:26:24.434 20:25:19 json_config -- json_config/common.sh@10 -- # shift 
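One detail worth pausing on: the "integer expression expected" complaint from nvmf/common.sh line 33 comes from the traced test '[' '' -eq 1 ']' — an unset variable reaching a numeric [ test. Reduced to a sketch (the variable name is illustrative):

```bash
# Why nvmf/common.sh line 33 complains: an empty string reaches -eq.
flag=""
[ "$flag" -eq 1 ]            # reproduces: [: : integer expression expected

# Conventional guard: default the expansion so the test always sees a number.
[ "${flag:-0}" -eq 1 ] || echo "flag unset/zero, branch skipped cleanly"
```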
00:26:24.434 20:25:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:26:24.434 20:25:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:26:24.434 20:25:19 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:26:24.434 20:25:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:26:24.434 20:25:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:26:24.434 20:25:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59258 00:26:24.434 Waiting for target to run... 00:26:24.434 20:25:19 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1536 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:26:24.434 20:25:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:26:24.434 20:25:19 json_config -- json_config/common.sh@25 -- # waitforlisten 59258 /var/tmp/spdk_tgt.sock 00:26:24.434 20:25:19 json_config -- common/autotest_common.sh@831 -- # '[' -z 59258 ']' 00:26:24.434 20:25:19 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:26:24.434 20:25:19 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:24.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:26:24.434 20:25:19 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:26:24.434 20:25:19 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:24.434 20:25:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:26:24.434 [2024-10-01 20:25:19.493661] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:26:24.434 [2024-10-01 20:25:19.494367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1536 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59258 ] 00:26:24.692 [2024-10-01 20:25:19.844180] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.692 [2024-10-01 20:25:19.900787] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.627 20:25:20 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:25.627 20:25:20 json_config -- common/autotest_common.sh@864 -- # return 0 00:26:25.627 00:26:25.627 20:25:20 json_config -- json_config/common.sh@26 -- # echo '' 00:26:25.627 20:25:20 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:26:25.627 20:25:20 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:26:25.627 20:25:20 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:25.627 20:25:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:26:25.627 20:25:20 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:26:25.627 20:25:20 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:26:25.627 20:25:20 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:25.627 20:25:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:26:25.627 20:25:20 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:26:25.627 20:25:20 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:26:25.627 20:25:20 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:26:26.194 20:25:21 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:26:26.194 20:25:21 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:26:26.194 20:25:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:26.194 20:25:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:26:26.194 20:25:21 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:26:26.194 20:25:21 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:26:26.194 20:25:21 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:26:26.194 20:25:21 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:26:26.194 20:25:21 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:26:26.194 20:25:21 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:26:26.194 20:25:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:26:26.194 20:25:21 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:26:26.452 20:25:21 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:26:26.452 20:25:21 json_config -- json_config/json_config.sh@51 -- # local get_types 00:26:26.452 20:25:21 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:26:26.452 20:25:21 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:26:26.452 20:25:21 json_config -- json_config/json_config.sh@54 -- # sort 00:26:26.452 20:25:21 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:26:26.452 20:25:21 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:26:26.452 20:25:21 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:26:26.452 20:25:21 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:26:26.452 20:25:21 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:26:26.452 20:25:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:26.452 20:25:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:26:26.452 20:25:21 json_config -- json_config/json_config.sh@62 -- # return 0 00:26:26.452 20:25:21 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:26:26.452 20:25:21 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:26:26.452 20:25:21 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:26:26.452 20:25:21 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:26:26.452 20:25:21 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:26:26.452 20:25:21 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:26:26.452 20:25:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:26.452 20:25:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:26:26.452 20:25:21 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:26:26.452 20:25:21 json_config -- json_config/json_config.sh@240 -- # [[ tcp == 
\r\d\m\a ]] 00:26:26.452 20:25:21 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:26:26.452 20:25:21 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:26:26.452 20:25:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:26:26.709 MallocForNvmf0 00:26:26.709 20:25:21 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:26:26.709 20:25:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:26:27.018 MallocForNvmf1 00:26:27.018 20:25:22 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:26:27.019 20:25:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:26:27.277 [2024-10-01 20:25:22.456971] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:27.277 20:25:22 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:27.277 20:25:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:27.536 20:25:22 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:26:27.536 20:25:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:26:28.103 20:25:23 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:26:28.103 20:25:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:26:28.362 20:25:23 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:26:28.362 20:25:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:26:28.622 [2024-10-01 20:25:23.769669] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:28.622 20:25:23 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:26:28.622 20:25:23 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:28.622 20:25:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:26:28.622 20:25:23 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:26:28.622 20:25:23 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:28.622 20:25:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:26:28.622 20:25:23 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:26:28.622 20:25:23 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name 
MallocBdevForConfigChangeCheck 00:26:28.622 20:25:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:26:29.189 MallocBdevForConfigChangeCheck 00:26:29.189 20:25:24 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:26:29.189 20:25:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:29.189 20:25:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:26:29.189 20:25:24 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:26:29.189 20:25:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:26:29.447 INFO: shutting down applications... 00:26:29.447 20:25:24 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:26:29.447 20:25:24 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:26:29.447 20:25:24 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:26:29.447 20:25:24 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:26:29.447 20:25:24 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:26:30.013 Calling clear_iscsi_subsystem 00:26:30.013 Calling clear_nvmf_subsystem 00:26:30.013 Calling clear_nbd_subsystem 00:26:30.013 Calling clear_ublk_subsystem 00:26:30.013 Calling clear_vhost_blk_subsystem 00:26:30.013 Calling clear_vhost_scsi_subsystem 00:26:30.013 Calling clear_bdev_subsystem 00:26:30.013 20:25:25 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:26:30.013 20:25:25 json_config -- json_config/json_config.sh@350 -- # count=100 00:26:30.013 20:25:25 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:26:30.013 20:25:25 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:26:30.013 20:25:25 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:26:30.013 20:25:25 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:26:30.581 20:25:25 json_config -- json_config/json_config.sh@352 -- # break 00:26:30.581 20:25:25 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:26:30.581 20:25:25 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:26:30.581 20:25:25 json_config -- json_config/common.sh@31 -- # local app=target 00:26:30.581 20:25:25 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:26:30.581 20:25:25 json_config -- json_config/common.sh@35 -- # [[ -n 59258 ]] 00:26:30.581 20:25:25 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59258 00:26:30.581 20:25:25 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:26:30.581 20:25:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:30.581 20:25:25 json_config -- json_config/common.sh@41 -- # kill -0 59258 00:26:30.581 20:25:25 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:26:30.839 20:25:26 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:26:30.839 20:25:26 json_config -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:26:30.839 20:25:26 json_config -- json_config/common.sh@41 -- # kill -0 59258 00:26:30.839 20:25:26 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:26:30.839 20:25:26 json_config -- json_config/common.sh@43 -- # break 00:26:30.839 20:25:26 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:26:30.839 SPDK target shutdown done 00:26:30.839 20:25:26 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:26:30.840 INFO: relaunching applications... 00:26:30.840 20:25:26 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:26:30.840 20:25:26 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:26:30.840 20:25:26 json_config -- json_config/common.sh@9 -- # local app=target 00:26:30.840 20:25:26 json_config -- json_config/common.sh@10 -- # shift 00:26:30.840 20:25:26 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:26:30.840 20:25:26 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:26:30.840 20:25:26 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:26:30.840 20:25:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:26:30.840 20:25:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:26:30.840 20:25:26 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59549 00:26:30.840 Waiting for target to run... 00:26:30.840 20:25:26 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:26:30.840 20:25:26 json_config -- json_config/common.sh@25 -- # waitforlisten 59549 /var/tmp/spdk_tgt.sock 00:26:30.840 20:25:26 json_config -- common/autotest_common.sh@831 -- # '[' -z 59549 ']' 00:26:30.840 20:25:26 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1536 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:26:30.840 20:25:26 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:26:30.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:26:30.840 20:25:26 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:30.840 20:25:26 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:26:30.840 20:25:26 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:30.840 20:25:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:26:31.098 [2024-10-01 20:25:26.128940] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
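The relaunch under way here is the round-trip at the heart of the json_config suite: the earlier save_config snapshot is handed back via --json, so the target rebuilds every subsystem (TCP transport, listener, malloc bdevs) from the file with no RPC replay. In sketch form, with an illustrative /tmp path:

```bash
# Config round-trip: snapshot a live target, then boot a fresh one from the
# file. Socket name and flags mirror the log; /tmp path is illustrative.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
SOCK=/var/tmp/spdk_tgt.sock

"$SPDK_DIR/scripts/rpc.py" -s "$SOCK" save_config > /tmp/tgt_config.json
# ...stop the old target, then:
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -r "$SOCK" --json /tmp/tgt_config.json
```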
00:26:31.098 [2024-10-01 20:25:26.129031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1536 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59549 ] 00:26:31.357 [2024-10-01 20:25:26.470635] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.357 [2024-10-01 20:25:26.516422] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.615 [2024-10-01 20:25:26.849452] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:31.874 [2024-10-01 20:25:26.881544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:32.132 20:25:27 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:32.132 00:26:32.132 20:25:27 json_config -- common/autotest_common.sh@864 -- # return 0 00:26:32.132 20:25:27 json_config -- json_config/common.sh@26 -- # echo '' 00:26:32.132 20:25:27 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:26:32.132 INFO: Checking if target configuration is the same... 00:26:32.132 20:25:27 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:26:32.132 20:25:27 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:26:32.132 20:25:27 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:26:32.132 20:25:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:26:32.132 + '[' 2 -ne 2 ']' 00:26:32.132 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:26:32.132 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:26:32.132 + rootdir=/home/vagrant/spdk_repo/spdk 00:26:32.132 +++ basename /dev/fd/62 00:26:32.132 ++ mktemp /tmp/62.XXX 00:26:32.132 + tmp_file_1=/tmp/62.GBG 00:26:32.132 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:26:32.132 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:26:32.132 + tmp_file_2=/tmp/spdk_tgt_config.json.M1R 00:26:32.132 + ret=0 00:26:32.132 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:26:32.391 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:26:32.649 + diff -u /tmp/62.GBG /tmp/spdk_tgt_config.json.M1R 00:26:32.649 + echo 'INFO: JSON config files are the same' 00:26:32.649 INFO: JSON config files are the same 00:26:32.649 + rm /tmp/62.GBG /tmp/spdk_tgt_config.json.M1R 00:26:32.649 + exit 0 00:26:32.649 20:25:27 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:26:32.649 INFO: changing configuration and checking if this can be detected... 00:26:32.649 20:25:27 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
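The comparison that just passed works by canonicalizing both JSON dumps and diffing them: identical output means exit 0. A condensed equivalent, using jq -S only as a rough stand-in for the in-tree config_filter.py -method sort (jq -S sorts object keys but leaves array order alone):

```bash
# Same-config check, condensed: canonicalize both dumps, then diff.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
SOCK=/var/tmp/spdk_tgt.sock

"$SPDK_DIR/scripts/rpc.py" -s "$SOCK" save_config | jq -S . > /tmp/live.json
jq -S . "$SPDK_DIR/spdk_tgt_config.json" > /tmp/disk.json
diff -u /tmp/live.json /tmp/disk.json && echo 'INFO: JSON config files are the same'
```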
00:26:32.649 20:25:27 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:26:32.649 20:25:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:26:32.907 20:25:27 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:26:32.907 20:25:27 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:26:32.907 20:25:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:26:32.907 + '[' 2 -ne 2 ']' 00:26:32.907 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:26:32.907 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:26:32.907 + rootdir=/home/vagrant/spdk_repo/spdk 00:26:32.907 +++ basename /dev/fd/62 00:26:32.907 ++ mktemp /tmp/62.XXX 00:26:32.907 + tmp_file_1=/tmp/62.fy6 00:26:32.907 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:26:32.907 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:26:32.907 + tmp_file_2=/tmp/spdk_tgt_config.json.e7X 00:26:32.907 + ret=0 00:26:32.907 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:26:33.474 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:26:33.474 + diff -u /tmp/62.fy6 /tmp/spdk_tgt_config.json.e7X 00:26:33.474 + ret=1 00:26:33.474 + echo '=== Start of file: /tmp/62.fy6 ===' 00:26:33.474 + cat /tmp/62.fy6 00:26:33.474 + echo '=== End of file: /tmp/62.fy6 ===' 00:26:33.474 + echo '' 00:26:33.474 + echo '=== Start of file: /tmp/spdk_tgt_config.json.e7X ===' 00:26:33.474 + cat /tmp/spdk_tgt_config.json.e7X 00:26:33.474 + echo '=== End of file: /tmp/spdk_tgt_config.json.e7X ===' 00:26:33.474 + echo '' 00:26:33.474 + rm /tmp/62.fy6 /tmp/spdk_tgt_config.json.e7X 00:26:33.474 + exit 1 00:26:33.474 INFO: configuration change detected. 00:26:33.474 20:25:28 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
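[Editorial note] The second pass above is the negative test: the scratch bdev MallocBdevForConfigChangeCheck exists purely to be deleted here, so that a re-saved config must differ and the diff must exit non-zero (ret=1 in the trace). A self-contained sketch of that flow, under the same assumptions as the previous note (temp file names are illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    # mutate the live configuration, then re-capture and re-normalize it
    "$rpc" -s "$sock" bdev_malloc_delete MallocBdevForConfigChangeCheck
    "$rpc" -s "$sock" save_config | python3 -m json.tool --sort-keys > /tmp/changed.sorted
    python3 -m json.tool --sort-keys /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/file.sorted
    # a clean diff at this point would mean change detection failed
    if ! diff -u /tmp/file.sorted /tmp/changed.sorted > /dev/null; then
        echo 'INFO: configuration change detected.'
    fi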
00:26:33.474 20:25:28 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:26:33.474 20:25:28 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:26:33.474 20:25:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:33.474 20:25:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:26:33.474 20:25:28 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:26:33.474 20:25:28 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:26:33.474 20:25:28 json_config -- json_config/json_config.sh@324 -- # [[ -n 59549 ]] 00:26:33.474 20:25:28 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:26:33.474 20:25:28 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:26:33.474 20:25:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:33.474 20:25:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:26:33.474 20:25:28 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:26:33.474 20:25:28 json_config -- json_config/json_config.sh@200 -- # uname -s 00:26:33.474 20:25:28 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:26:33.474 20:25:28 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:26:33.474 20:25:28 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:26:33.474 20:25:28 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:26:33.474 20:25:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:33.474 20:25:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:26:33.474 20:25:28 json_config -- json_config/json_config.sh@330 -- # killprocess 59549 00:26:33.474 20:25:28 json_config -- common/autotest_common.sh@950 -- # '[' -z 59549 ']' 00:26:33.474 20:25:28 json_config -- common/autotest_common.sh@954 -- # kill -0 59549 00:26:33.474 20:25:28 json_config -- common/autotest_common.sh@955 -- # uname 00:26:33.474 20:25:28 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:33.474 20:25:28 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59549 00:26:33.474 20:25:28 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:33.474 20:25:28 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:33.474 killing process with pid 59549 00:26:33.474 20:25:28 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59549' 00:26:33.474 20:25:28 json_config -- common/autotest_common.sh@969 -- # kill 59549 00:26:33.474 20:25:28 json_config -- common/autotest_common.sh@974 -- # wait 59549 00:26:33.733 20:25:28 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:26:33.733 20:25:28 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:26:33.733 20:25:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:33.733 20:25:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:26:33.733 20:25:28 json_config -- json_config/json_config.sh@335 -- # return 0 00:26:33.733 INFO: Success 00:26:33.733 20:25:28 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:26:33.733 00:26:33.733 real 0m9.569s 00:26:33.733 user 0m14.360s 00:26:33.733 sys 0m1.729s 00:26:33.733 
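[Editorial note] The killprocess teardown traced above for pid 59549 is defensive: it checks the pid is still alive, inspects the process name with ps (expecting an SPDK reactor such as reactor_0, with sudo-wrapped targets handled specially in the real helper), then signals and reaps it so the exit status is collected. A simplified sketch of that shape, not autotest_common.sh verbatim:

    killprocess() {
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 0      # already gone, nothing to do
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0
        [ "$name" = sudo ] && return 1              # the real helper treats sudo wrappers specially
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap; valid for children of this shell
    }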
20:25:28 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:33.733 20:25:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:26:33.733 ************************************ 00:26:33.733 END TEST json_config 00:26:33.733 ************************************ 00:26:33.733 20:25:28 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:26:33.733 20:25:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:33.733 20:25:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:33.733 20:25:28 -- common/autotest_common.sh@10 -- # set +x 00:26:33.733 ************************************ 00:26:33.733 START TEST json_config_extra_key 00:26:33.733 ************************************ 00:26:33.733 20:25:28 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:26:33.733 20:25:28 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:33.733 20:25:28 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:26:33.733 20:25:28 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:34.001 20:25:28 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:34.001 20:25:28 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:34.001 20:25:28 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:34.001 20:25:28 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:34.001 20:25:28 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:26:34.001 20:25:28 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:26:34.001 20:25:28 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:26:34.001 20:25:28 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:26:34.001 20:25:28 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:26:34.001 20:25:28 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:26:34.001 20:25:28 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:26:34.001 20:25:28 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:34.001 20:25:28 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:26:34.001 20:25:28 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:26:34.001 20:25:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:34.001 20:25:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:34.001 20:25:28 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:26:34.001 20:25:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:26:34.001 20:25:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:34.001 20:25:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:26:34.001 20:25:28 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:26:34.001 20:25:28 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:26:34.001 20:25:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:26:34.001 20:25:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:34.001 20:25:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:26:34.001 20:25:29 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:26:34.001 20:25:29 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:34.001 20:25:29 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:34.001 20:25:29 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:26:34.001 20:25:29 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:34.001 20:25:29 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:34.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.001 --rc genhtml_branch_coverage=1 00:26:34.001 --rc genhtml_function_coverage=1 00:26:34.001 --rc genhtml_legend=1 00:26:34.001 --rc geninfo_all_blocks=1 00:26:34.001 --rc geninfo_unexecuted_blocks=1 00:26:34.001 00:26:34.001 ' 00:26:34.001 20:25:29 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:34.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.001 --rc genhtml_branch_coverage=1 00:26:34.001 --rc genhtml_function_coverage=1 00:26:34.001 --rc genhtml_legend=1 00:26:34.001 --rc geninfo_all_blocks=1 00:26:34.001 --rc geninfo_unexecuted_blocks=1 00:26:34.001 00:26:34.001 ' 00:26:34.001 20:25:29 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:34.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.001 --rc genhtml_branch_coverage=1 00:26:34.001 --rc genhtml_function_coverage=1 00:26:34.001 --rc genhtml_legend=1 00:26:34.001 --rc geninfo_all_blocks=1 00:26:34.001 --rc geninfo_unexecuted_blocks=1 00:26:34.001 00:26:34.001 ' 00:26:34.001 20:25:29 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:34.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.001 --rc genhtml_branch_coverage=1 00:26:34.001 --rc genhtml_function_coverage=1 00:26:34.001 --rc genhtml_legend=1 00:26:34.001 --rc geninfo_all_blocks=1 00:26:34.001 --rc geninfo_unexecuted_blocks=1 00:26:34.001 00:26:34.001 ' 00:26:34.001 20:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:34.001 20:25:29 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:34.001 20:25:29 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:26:34.001 20:25:29 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:34.001 20:25:29 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:34.001 20:25:29 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:34.001 20:25:29 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.001 20:25:29 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.001 20:25:29 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.001 20:25:29 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:26:34.001 20:25:29 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:34.001 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:34.001 20:25:29 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:34.001 20:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:26:34.001 20:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:26:34.001 20:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:26:34.001 20:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:26:34.001 20:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:26:34.001 20:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1536') 00:26:34.001 20:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:26:34.001 20:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:26:34.001 20:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:26:34.001 20:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:26:34.001 INFO: launching applications... 00:26:34.001 20:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
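[Editorial note] The "line 33: [: : integer expression expected" complaint above is bash evaluating '[' '' -eq 1 ']': a test flag is unset in this environment, so the numeric -eq operator sees an empty string. The message is harmless in this run, but the usual guard is to default the variable before the test. Sketch only; the variable name is illustrative, not the one used in nvmf/common.sh:

    # default an unset or empty flag to 0 so numeric tests never see ''
    : "${SPDK_SOME_FLAG:=0}"
    if [ "$SPDK_SOME_FLAG" -eq 1 ]; then
        echo 'feature enabled'
    fi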
00:26:34.001 20:25:29 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:26:34.002 20:25:29 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:26:34.002 20:25:29 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:26:34.002 20:25:29 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:26:34.002 20:25:29 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:26:34.002 20:25:29 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:26:34.002 20:25:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:26:34.002 20:25:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:26:34.002 20:25:29 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59733 00:26:34.002 Waiting for target to run... 00:26:34.002 20:25:29 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1536 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:26:34.002 20:25:29 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:26:34.002 20:25:29 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59733 /var/tmp/spdk_tgt.sock 00:26:34.002 20:25:29 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 59733 ']' 00:26:34.002 20:25:29 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:26:34.002 20:25:29 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:34.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:26:34.002 20:25:29 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:26:34.002 20:25:29 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:34.002 20:25:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:26:34.002 [2024-10-01 20:25:29.105710] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:26:34.002 [2024-10-01 20:25:29.105824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1536 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59733 ] 00:26:34.259 [2024-10-01 20:25:29.461452] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.517 [2024-10-01 20:25:29.534696] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.085 20:25:30 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:35.085 20:25:30 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:26:35.085 00:26:35.085 20:25:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:26:35.085 INFO: shutting down applications... 00:26:35.085 20:25:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
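[Editorial note] The shutdown sequence traced next is a bounded poll: send SIGINT to the target, then probe it with kill -0 across up to 30 half-second intervals (roughly 15 s) before printing "SPDK target shutdown done". A compact sketch of the same loop; the function name is an assumption:

    shutdown_tgt() {
        local pid=$1
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        return 1    # still alive after the retry budget
    }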
00:26:35.085 20:25:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:26:35.085 20:25:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:26:35.085 20:25:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:26:35.085 20:25:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59733 ]] 00:26:35.085 20:25:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59733 00:26:35.085 20:25:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:26:35.085 20:25:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:35.085 20:25:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59733 00:26:35.085 20:25:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:35.675 20:25:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:35.675 20:25:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:35.675 20:25:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59733 00:26:35.675 20:25:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:26:35.675 20:25:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:26:35.675 20:25:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:26:35.675 SPDK target shutdown done 00:26:35.675 20:25:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:26:35.675 Success 00:26:35.675 20:25:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:26:35.675 00:26:35.675 real 0m1.814s 00:26:35.675 user 0m1.768s 00:26:35.675 sys 0m0.404s 00:26:35.675 20:25:30 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:35.675 20:25:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:26:35.675 ************************************ 00:26:35.675 END TEST json_config_extra_key 00:26:35.675 ************************************ 00:26:35.675 20:25:30 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:26:35.675 20:25:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:35.675 20:25:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:35.675 20:25:30 -- common/autotest_common.sh@10 -- # set +x 00:26:35.675 ************************************ 00:26:35.675 START TEST alias_rpc 00:26:35.675 ************************************ 00:26:35.675 20:25:30 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:26:35.675 * Looking for test storage... 
00:26:35.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:26:35.675 20:25:30 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:35.675 20:25:30 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:26:35.675 20:25:30 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:35.675 20:25:30 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@345 -- # : 1 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:35.675 20:25:30 alias_rpc -- scripts/common.sh@368 -- # return 0 00:26:35.675 20:25:30 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:35.675 20:25:30 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:35.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.675 --rc genhtml_branch_coverage=1 00:26:35.675 --rc genhtml_function_coverage=1 00:26:35.675 --rc genhtml_legend=1 00:26:35.675 --rc geninfo_all_blocks=1 00:26:35.675 --rc geninfo_unexecuted_blocks=1 00:26:35.675 00:26:35.675 ' 00:26:35.675 20:25:30 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:35.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.675 --rc genhtml_branch_coverage=1 00:26:35.675 --rc genhtml_function_coverage=1 00:26:35.675 --rc genhtml_legend=1 00:26:35.675 --rc geninfo_all_blocks=1 00:26:35.675 --rc geninfo_unexecuted_blocks=1 00:26:35.675 00:26:35.675 ' 00:26:35.675 20:25:30 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:35.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.675 --rc genhtml_branch_coverage=1 00:26:35.675 --rc genhtml_function_coverage=1 00:26:35.675 --rc genhtml_legend=1 00:26:35.675 --rc geninfo_all_blocks=1 00:26:35.675 --rc geninfo_unexecuted_blocks=1 00:26:35.675 00:26:35.675 ' 00:26:35.675 20:25:30 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:35.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.675 --rc genhtml_branch_coverage=1 00:26:35.675 --rc genhtml_function_coverage=1 00:26:35.675 --rc genhtml_legend=1 00:26:35.675 --rc geninfo_all_blocks=1 00:26:35.675 --rc geninfo_unexecuted_blocks=1 00:26:35.675 00:26:35.675 ' 00:26:35.675 20:25:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:26:35.675 20:25:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59817 00:26:35.675 20:25:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:35.675 20:25:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59817 00:26:35.675 20:25:30 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 59817 ']' 00:26:35.675 20:25:30 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.675 20:25:30 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:35.675 20:25:30 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:35.675 20:25:30 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:35.675 20:25:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:35.933 [2024-10-01 20:25:30.951909] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:26:35.933 [2024-10-01 20:25:30.952054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59817 ] 00:26:35.933 [2024-10-01 20:25:31.099680] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.933 [2024-10-01 20:25:31.173899] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.867 20:25:32 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:36.867 20:25:32 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:26:36.867 20:25:32 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:26:37.435 20:25:32 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59817 00:26:37.435 20:25:32 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 59817 ']' 00:26:37.435 20:25:32 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 59817 00:26:37.435 20:25:32 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:26:37.435 20:25:32 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:37.435 20:25:32 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59817 00:26:37.435 20:25:32 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:37.435 20:25:32 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:37.435 20:25:32 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59817' 00:26:37.435 killing process with pid 59817 00:26:37.435 20:25:32 alias_rpc -- common/autotest_common.sh@969 -- # kill 59817 00:26:37.435 20:25:32 alias_rpc -- common/autotest_common.sh@974 -- # wait 59817 00:26:37.694 00:26:37.694 real 0m2.090s 00:26:37.694 user 0m2.613s 00:26:37.694 sys 0m0.417s 00:26:37.694 20:25:32 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:37.694 20:25:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:37.694 ************************************ 00:26:37.694 END TEST alias_rpc 00:26:37.694 ************************************ 00:26:37.694 20:25:32 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:26:37.694 20:25:32 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:26:37.694 20:25:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:37.694 20:25:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:37.694 20:25:32 -- common/autotest_common.sh@10 -- # set +x 00:26:37.694 ************************************ 00:26:37.694 START TEST dpdk_mem_utility 00:26:37.694 ************************************ 00:26:37.694 20:25:32 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:26:37.694 * Looking for test storage... 
00:26:37.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:26:37.694 20:25:32 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:37.694 20:25:32 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:26:37.694 20:25:32 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:37.953 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:37.953 20:25:33 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:37.953 20:25:33 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:37.954 20:25:33 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:26:37.954 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:37.954 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:37.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.954 --rc genhtml_branch_coverage=1 00:26:37.954 --rc genhtml_function_coverage=1 00:26:37.954 --rc genhtml_legend=1 00:26:37.954 --rc geninfo_all_blocks=1 00:26:37.954 --rc geninfo_unexecuted_blocks=1 00:26:37.954 00:26:37.954 ' 00:26:37.954 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:37.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.954 --rc 
genhtml_branch_coverage=1 00:26:37.954 --rc genhtml_function_coverage=1 00:26:37.954 --rc genhtml_legend=1 00:26:37.954 --rc geninfo_all_blocks=1 00:26:37.954 --rc geninfo_unexecuted_blocks=1 00:26:37.954 00:26:37.954 ' 00:26:37.954 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:37.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.954 --rc genhtml_branch_coverage=1 00:26:37.954 --rc genhtml_function_coverage=1 00:26:37.954 --rc genhtml_legend=1 00:26:37.954 --rc geninfo_all_blocks=1 00:26:37.954 --rc geninfo_unexecuted_blocks=1 00:26:37.954 00:26:37.954 ' 00:26:37.954 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:37.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.954 --rc genhtml_branch_coverage=1 00:26:37.954 --rc genhtml_function_coverage=1 00:26:37.954 --rc genhtml_legend=1 00:26:37.954 --rc geninfo_all_blocks=1 00:26:37.954 --rc geninfo_unexecuted_blocks=1 00:26:37.954 00:26:37.954 ' 00:26:37.954 20:25:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:26:37.954 20:25:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59923 00:26:37.954 20:25:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:37.954 20:25:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59923 00:26:37.954 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 59923 ']' 00:26:37.954 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.954 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:37.954 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:37.954 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:37.954 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:26:37.954 [2024-10-01 20:25:33.094007] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:26:37.954 [2024-10-01 20:25:33.094167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59923 ] 00:26:38.236 [2024-10-01 20:25:33.234585] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.236 [2024-10-01 20:25:33.304756] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.507 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:38.507 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:26:38.507 20:25:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:26:38.507 20:25:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:26:38.507 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.507 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:26:38.507 { 00:26:38.507 "filename": "/tmp/spdk_mem_dump.txt" 00:26:38.507 } 00:26:38.507 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.507 20:25:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:26:38.507 DPDK memory size 1100.000000 MiB in 1 heap(s) 00:26:38.507 1 heaps totaling size 1100.000000 MiB 00:26:38.507 size: 1100.000000 MiB heap id: 0 00:26:38.507 end heaps---------- 00:26:38.507 9 mempools totaling size 883.273621 MiB 00:26:38.507 size: 333.169250 MiB name: bdev_io_59923 00:26:38.507 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:26:38.507 size: 158.602051 MiB name: PDU_data_out_Pool 00:26:38.507 size: 51.011292 MiB name: evtpool_59923 00:26:38.507 size: 50.003479 MiB name: msgpool_59923 00:26:38.507 size: 36.509338 MiB name: fsdev_io_59923 00:26:38.507 size: 21.763794 MiB name: PDU_Pool 00:26:38.507 size: 19.513306 MiB name: SCSI_TASK_Pool 00:26:38.507 size: 0.026123 MiB name: Session_Pool 00:26:38.507 end mempools------- 00:26:38.507 6 memzones totaling size 4.142822 MiB 00:26:38.507 size: 1.000366 MiB name: RG_ring_0_59923 00:26:38.507 size: 1.000366 MiB name: RG_ring_1_59923 00:26:38.507 size: 1.000366 MiB name: RG_ring_4_59923 00:26:38.507 size: 1.000366 MiB name: RG_ring_5_59923 00:26:38.507 size: 0.125366 MiB name: RG_ring_2_59923 00:26:38.507 size: 0.015991 MiB name: RG_ring_3_59923 00:26:38.507 end memzones------- 00:26:38.507 20:25:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:26:38.507 heap id: 0 total size: 1100.000000 MiB number of busy elements: 379 number of free elements: 16 00:26:38.507 list of free elements. 
size: 13.299561 MiB 00:26:38.507 element at address: 0x200000400000 with size: 1.999512 MiB 00:26:38.507 element at address: 0x200000800000 with size: 1.996948 MiB 00:26:38.507 element at address: 0x20002ac00000 with size: 0.999878 MiB 00:26:38.507 element at address: 0x20002ae00000 with size: 0.999878 MiB 00:26:38.507 element at address: 0x200043a00000 with size: 0.994446 MiB 00:26:38.507 element at address: 0x200009600000 with size: 0.959839 MiB 00:26:38.507 element at address: 0x20002b000000 with size: 0.936584 MiB 00:26:38.507 element at address: 0x200000200000 with size: 0.837219 MiB 00:26:38.507 element at address: 0x20002c800000 with size: 0.573181 MiB 00:26:38.507 element at address: 0x20000d800000 with size: 0.488892 MiB 00:26:38.507 element at address: 0x20002b200000 with size: 0.485657 MiB 00:26:38.507 element at address: 0x200007000000 with size: 0.480286 MiB 00:26:38.507 element at address: 0x200003e00000 with size: 0.465942 MiB 00:26:38.507 element at address: 0x200039c00000 with size: 0.397583 MiB 00:26:38.507 element at address: 0x200003a00000 with size: 0.353210 MiB 00:26:38.507 element at address: 0x200015e00000 with size: 0.330505 MiB 00:26:38.507 list of standard malloc elements. size: 199.279968 MiB 00:26:38.507 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:26:38.507 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:26:38.507 element at address: 0x20002acfff80 with size: 1.000122 MiB 00:26:38.507 element at address: 0x20002aefff80 with size: 1.000122 MiB 00:26:38.507 element at address: 0x20002b0fff80 with size: 1.000122 MiB 00:26:38.507 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:26:38.507 element at address: 0x20002b0eff00 with size: 0.062622 MiB 00:26:38.507 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:26:38.507 element at address: 0x20002b0efdc0 with size: 0.000305 MiB 00:26:38.507 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:26:38.507 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:26:38.507 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:26:38.507 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:26:38.507 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:26:38.507 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:26:38.507 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:26:38.507 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:26:38.507 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:26:38.508 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:26:38.508 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:26:38.508 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:26:38.508 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:26:38.508 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:26:38.508 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:26:38.508 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:26:38.508 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:26:38.508 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:26:38.508 element at address: 0x2000002d7700 with size: 0.000183 MiB 
00:26:38.508 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:26:38.508 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:26:38.508 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:26:38.508 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:26:38.508 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:26:38.508 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003a5a6c0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003a5eb80 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003aff880 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003affa80 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003affb40 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e77480 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e77540 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e77600 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e776c0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e77780 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e77840 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e77900 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e779c0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e77a80 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e77b40 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e77c00 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e77cc0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e77d80 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e77e40 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e77f00 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e77fc0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e78080 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e78140 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e78200 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e782c0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e78380 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e78440 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e78500 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e785c0 with size: 0.000183 MiB 00:26:38.508 element at 
address: 0x200003e78680 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e78740 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e78800 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e788c0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e78980 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e78a40 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e78b00 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e78bc0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e78c80 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e78d40 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e78e00 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e78ec0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e78f80 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e79040 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e79100 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e791c0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e79280 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e79340 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e79400 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e794c0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e79580 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e79640 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e79700 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e797c0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e79880 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e79940 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e79a00 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e79ac0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e79b80 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e79c40 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e79d00 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e79dc0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e79e80 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e79f40 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e7a000 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e7a0c0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e7a180 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e7a240 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e7a300 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e7a3c0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e7a480 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e7a540 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e7a600 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e7a6c0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e7a780 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e7a840 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e7a900 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e7a9c0 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e7aa80 with size: 0.000183 MiB 00:26:38.508 element at address: 0x200003e7ab40 
with size: 0.000183 MiB 00:26:38.508 [element listing condensed: several hundred further free-list entries of 0.000183 MiB each, covering 0x200003e7ac00-0x200003e7ee00, 0x200003eff0c0, 0x20000707af40-0x20000707b6c0, 0x2000070fb980, 0x2000096fdd80, 0x20000d87d280-0x20000d87dac0, 0x20000d8fdd80, 0x200015e549c0, 0x20002b0efc40-0x20002b0efd00, 0x20002b2bc740, 0x20002c892bc0-0x20002c895440 and 0x200039c65c80-0x200039c6ff00]
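The free-list entries above and the memzone summary below are two halves of a single DPDK memory-statistics dump produced by the dpdk_mem_utility test. A minimal sketch of pulling the same dump from a running SPDK target by hand, assuming the default RPC socket, an SPDK source tree, and that the env_dpdk_get_mem_stats RPC is available (the output path is an assumption; the RPC response names the actual file):

  # ask the target to write its DPDK memory stats to a file and report the filename
  scripts/rpc.py env_dpdk_get_mem_stats
  # then inspect the reported file, e.g.:
  cat /tmp/spdk_mem_dump.txt   # assumed path; use whatever filename the RPC returned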
00:26:38.510 list of memzone associated elements. size: 887.420471 MiB 00:26:38.510 element at address: 0x200015f54c80 with size: 332.668823 MiB 00:26:38.510 associated memzone info: size: 332.668701 MiB name: MP_bdev_io_59923_0 00:26:38.510 element at address: 0x20002c895500 with size: 211.416748 MiB 00:26:38.510 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:26:38.510 element at address: 0x200039c6ffc0 with size: 157.562561 MiB 00:26:38.510 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:26:38.510 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:26:38.510 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59923_0 00:26:38.510 element at address: 0x200003fff380 with size: 48.003052 MiB 00:26:38.510 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59923_0 00:26:38.510 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:26:38.510 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59923_0 00:26:38.510 element at address: 0x20002b3be940 with size: 20.255554 MiB 00:26:38.510 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:26:38.510 element at address: 0x200043bfeb40 with size: 18.005066 MiB 00:26:38.510 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:26:38.510 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:26:38.510 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59923 00:26:38.510 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:26:38.510 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59923 00:26:38.510 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:26:38.510 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59923 00:26:38.510 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:26:38.510 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:26:38.510 element at address: 0x20002b2bc800 with size: 1.008118 MiB 00:26:38.510 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:26:38.510 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:26:38.510 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:26:38.510 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:26:38.510 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:26:38.510 element at address: 0x200003eff180 with size: 1.000488 MiB 00:26:38.510 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59923 00:26:38.510 element at address: 0x200003affc00 with size: 1.000488 MiB 00:26:38.510 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59923 00:26:38.510 element at address: 0x200015e54a80 with size: 1.000488 MiB 00:26:38.510 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59923 00:26:38.510 element at address: 0x200043afe940 with size: 1.000488 MiB 00:26:38.510 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59923 00:26:38.510 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:26:38.510 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59923 00:26:38.510 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:26:38.510 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59923 00:26:38.510 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:26:38.510 associated memzone
info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:26:38.510 element at address: 0x20000707b780 with size: 0.500488 MiB 00:26:38.510 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:26:38.510 element at address: 0x20002b27c540 with size: 0.250488 MiB 00:26:38.510 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:26:38.510 element at address: 0x200003a5ec40 with size: 0.125488 MiB 00:26:38.510 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59923 00:26:38.510 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:26:38.510 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:26:38.510 element at address: 0x200039c65e00 with size: 0.023743 MiB 00:26:38.510 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:26:38.510 element at address: 0x200003a5a980 with size: 0.016113 MiB 00:26:38.510 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59923 00:26:38.510 element at address: 0x200039c6bf40 with size: 0.002441 MiB 00:26:38.510 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:26:38.510 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:26:38.510 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59923 00:26:38.510 element at address: 0x200003aff940 with size: 0.000305 MiB 00:26:38.510 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59923 00:26:38.510 element at address: 0x200003a5a780 with size: 0.000305 MiB 00:26:38.510 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59923 00:26:38.510 element at address: 0x200039c6ca00 with size: 0.000305 MiB 00:26:38.510 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:26:38.510 20:25:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:26:38.510 20:25:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59923 00:26:38.510 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 59923 ']' 00:26:38.510 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 59923 00:26:38.510 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:26:38.510 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:38.510 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59923 00:26:38.769 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:38.769 killing process with pid 59923 00:26:38.769 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:38.769 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59923' 00:26:38.769 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 59923 00:26:38.769 20:25:33 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 59923 00:26:39.028 00:26:39.028 real 0m1.234s 00:26:39.028 user 0m1.265s 00:26:39.028 sys 0m0.388s 00:26:39.028 20:25:34 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:39.028 20:25:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:26:39.028 ************************************ 00:26:39.028 END TEST dpdk_mem_utility 00:26:39.028 ************************************ 00:26:39.028 20:25:34 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:26:39.028 20:25:34 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:39.028 20:25:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:39.028 20:25:34 -- common/autotest_common.sh@10 -- # set +x 00:26:39.028 ************************************ 00:26:39.028 START TEST event 00:26:39.028 ************************************ 00:26:39.028 20:25:34 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:26:39.028 * Looking for test storage... 00:26:39.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:26:39.028 20:25:34 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:39.028 20:25:34 event -- common/autotest_common.sh@1681 -- # lcov --version 00:26:39.028 20:25:34 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:39.287 20:25:34 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:39.287 20:25:34 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:39.287 20:25:34 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:39.287 20:25:34 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:39.287 20:25:34 event -- scripts/common.sh@336 -- # IFS=.-: 00:26:39.287 20:25:34 event -- scripts/common.sh@336 -- # read -ra ver1 00:26:39.287 20:25:34 event -- scripts/common.sh@337 -- # IFS=.-: 00:26:39.287 20:25:34 event -- scripts/common.sh@337 -- # read -ra ver2 00:26:39.287 20:25:34 event -- scripts/common.sh@338 -- # local 'op=<' 00:26:39.287 20:25:34 event -- scripts/common.sh@340 -- # ver1_l=2 00:26:39.287 20:25:34 event -- scripts/common.sh@341 -- # ver2_l=1 00:26:39.287 20:25:34 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:39.287 20:25:34 event -- scripts/common.sh@344 -- # case "$op" in 00:26:39.287 20:25:34 event -- scripts/common.sh@345 -- # : 1 00:26:39.287 20:25:34 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:39.287 20:25:34 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:39.287 20:25:34 event -- scripts/common.sh@365 -- # decimal 1 00:26:39.287 20:25:34 event -- scripts/common.sh@353 -- # local d=1 00:26:39.287 20:25:34 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:39.287 20:25:34 event -- scripts/common.sh@355 -- # echo 1 00:26:39.287 20:25:34 event -- scripts/common.sh@365 -- # ver1[v]=1 00:26:39.287 20:25:34 event -- scripts/common.sh@366 -- # decimal 2 00:26:39.287 20:25:34 event -- scripts/common.sh@353 -- # local d=2 00:26:39.287 20:25:34 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:39.287 20:25:34 event -- scripts/common.sh@355 -- # echo 2 00:26:39.287 20:25:34 event -- scripts/common.sh@366 -- # ver2[v]=2 00:26:39.287 20:25:34 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:39.287 20:25:34 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:39.287 20:25:34 event -- scripts/common.sh@368 -- # return 0 00:26:39.287 20:25:34 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:39.287 20:25:34 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:39.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.287 --rc genhtml_branch_coverage=1 00:26:39.287 --rc genhtml_function_coverage=1 00:26:39.287 --rc genhtml_legend=1 00:26:39.287 --rc geninfo_all_blocks=1 00:26:39.287 --rc geninfo_unexecuted_blocks=1 00:26:39.287 00:26:39.287 ' 00:26:39.287 20:25:34 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:39.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.287 --rc genhtml_branch_coverage=1 00:26:39.287 --rc genhtml_function_coverage=1 00:26:39.287 --rc genhtml_legend=1 00:26:39.287 --rc geninfo_all_blocks=1 00:26:39.287 --rc geninfo_unexecuted_blocks=1 00:26:39.287 00:26:39.287 ' 00:26:39.287 20:25:34 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:39.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.287 --rc genhtml_branch_coverage=1 00:26:39.287 --rc genhtml_function_coverage=1 00:26:39.287 --rc genhtml_legend=1 00:26:39.287 --rc geninfo_all_blocks=1 00:26:39.287 --rc geninfo_unexecuted_blocks=1 00:26:39.287 00:26:39.287 ' 00:26:39.287 20:25:34 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:39.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.287 --rc genhtml_branch_coverage=1 00:26:39.287 --rc genhtml_function_coverage=1 00:26:39.287 --rc genhtml_legend=1 00:26:39.287 --rc geninfo_all_blocks=1 00:26:39.287 --rc geninfo_unexecuted_blocks=1 00:26:39.287 00:26:39.287 ' 00:26:39.287 20:25:34 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:26:39.287 20:25:34 event -- bdev/nbd_common.sh@6 -- # set -e 00:26:39.287 20:25:34 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:26:39.287 20:25:34 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:26:39.287 20:25:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:39.287 20:25:34 event -- common/autotest_common.sh@10 -- # set +x 00:26:39.287 ************************************ 00:26:39.287 START TEST event_perf 00:26:39.287 ************************************ 00:26:39.287 20:25:34 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:26:39.287 Running I/O for 1 seconds...[2024-10-01 
20:25:34.375226] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:26:39.287 [2024-10-01 20:25:34.375327] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60007 ] 00:26:39.287 [2024-10-01 20:25:34.509101] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:39.546 [2024-10-01 20:25:34.572017] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.546 [2024-10-01 20:25:34.572100] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:39.546 [2024-10-01 20:25:34.572216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:39.546 Running I/O for 1 seconds...[2024-10-01 20:25:34.572529] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.479 00:26:40.479 lcore 0: 187801 00:26:40.479 lcore 1: 187799 00:26:40.479 lcore 2: 187800 00:26:40.479 lcore 3: 187801 00:26:40.479 done. 00:26:40.479 00:26:40.479 real 0m1.289s 00:26:40.479 user 0m4.112s 00:26:40.479 sys 0m0.053s 00:26:40.479 20:25:35 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:40.479 20:25:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:26:40.479 ************************************ 00:26:40.479 END TEST event_perf 00:26:40.479 ************************************ 00:26:40.479 20:25:35 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:26:40.479 20:25:35 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:40.479 20:25:35 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:40.479 20:25:35 event -- common/autotest_common.sh@10 -- # set +x 00:26:40.479 ************************************ 00:26:40.479 START TEST event_reactor 00:26:40.479 ************************************ 00:26:40.479 20:25:35 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:26:40.479 [2024-10-01 20:25:35.718199] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
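The event_perf pass above pushed roughly 188k events per lcore in one second across four reactors. A hedged sketch of repeating that measurement by hand from a built SPDK tree; the binary path and flags are taken from the trace, while the awk post-processing is an addition:

  # -m 0xF pins four reactors, -t 1 measures for one second;
  # sum the per-lcore counters printed as lines like "lcore 0: 187801"
  ./test/event/event_perf/event_perf -m 0xF -t 1 \
    | awk '/^lcore/ {sum += $3} END {printf "total: %d events/sec\n", sum}'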
00:26:40.479 [2024-10-01 20:25:35.718292] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60040 ] 00:26:40.736 [2024-10-01 20:25:35.858664] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.736 [2024-10-01 20:25:35.929273] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.136 test_start 00:26:42.136 oneshot 00:26:42.136 tick 100 00:26:42.136 tick 100 00:26:42.136 tick 250 00:26:42.136 tick 100 00:26:42.136 tick 100 00:26:42.136 tick 100 00:26:42.136 tick 250 00:26:42.136 tick 500 00:26:42.136 tick 100 00:26:42.136 tick 100 00:26:42.136 tick 250 00:26:42.136 tick 100 00:26:42.136 tick 100 00:26:42.136 test_end 00:26:42.136 00:26:42.136 real 0m1.300s 00:26:42.136 user 0m1.156s 00:26:42.136 sys 0m0.037s 00:26:42.136 20:25:36 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:42.136 20:25:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:26:42.136 ************************************ 00:26:42.136 END TEST event_reactor 00:26:42.136 ************************************ 00:26:42.136 20:25:37 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:26:42.136 20:25:37 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:42.136 20:25:37 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:42.136 20:25:37 event -- common/autotest_common.sh@10 -- # set +x 00:26:42.136 ************************************ 00:26:42.136 START TEST event_reactor_perf 00:26:42.136 ************************************ 00:26:42.136 20:25:37 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:26:42.136 [2024-10-01 20:25:37.063137] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
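The tick trace above is the reactor test exercising pollers on a single core: the numbers after each tick appear to correspond to poller periods, with oneshot marking a run-once poller. A hedged way to sanity-check that the ticks are periodic, reusing the same binary and -t flag as the trace (the grep is an addition):

  # a 3-second run should emit roughly three times as many tick lines as -t 1 did
  ./test/event/reactor/reactor -t 3 | grep -c '^tick'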
00:26:42.136 [2024-10-01 20:25:37.063225] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60075 ] 00:26:42.136 [2024-10-01 20:25:37.198960] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.136 [2024-10-01 20:25:37.261124] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.076 test_start 00:26:43.077 test_end 00:26:43.077 Performance: 362265 events per second 00:26:43.335 00:26:43.335 real 0m1.283s 00:26:43.335 user 0m1.140s 00:26:43.335 sys 0m0.036s 00:26:43.335 20:25:38 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:43.335 20:25:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:26:43.335 ************************************ 00:26:43.335 END TEST event_reactor_perf 00:26:43.335 ************************************ 00:26:43.335 20:25:38 event -- event/event.sh@49 -- # uname -s 00:26:43.335 20:25:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:26:43.335 20:25:38 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:26:43.335 20:25:38 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:43.335 20:25:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:43.335 20:25:38 event -- common/autotest_common.sh@10 -- # set +x 00:26:43.335 ************************************ 00:26:43.335 START TEST event_scheduler 00:26:43.335 ************************************ 00:26:43.335 20:25:38 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:26:43.335 * Looking for test storage... 
00:26:43.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:26:43.335 20:25:38 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:43.335 20:25:38 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:43.335 20:25:38 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:26:43.335 20:25:38 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:43.335 20:25:38 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:26:43.335 20:25:38 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:43.335 20:25:38 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:43.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.335 --rc genhtml_branch_coverage=1 00:26:43.335 --rc genhtml_function_coverage=1 00:26:43.335 --rc genhtml_legend=1 00:26:43.335 --rc geninfo_all_blocks=1 00:26:43.335 --rc geninfo_unexecuted_blocks=1 00:26:43.335 00:26:43.335 ' 00:26:43.335 20:25:38 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:43.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.335 --rc genhtml_branch_coverage=1 00:26:43.335 --rc genhtml_function_coverage=1 00:26:43.335 --rc genhtml_legend=1 00:26:43.335 --rc geninfo_all_blocks=1 00:26:43.335 --rc geninfo_unexecuted_blocks=1 00:26:43.335 00:26:43.335 ' 00:26:43.335 20:25:38 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:43.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.335 --rc genhtml_branch_coverage=1 00:26:43.335 --rc genhtml_function_coverage=1 00:26:43.335 --rc genhtml_legend=1 00:26:43.335 --rc geninfo_all_blocks=1 00:26:43.335 --rc geninfo_unexecuted_blocks=1 00:26:43.335 00:26:43.335 ' 00:26:43.335 20:25:38 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:43.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.335 --rc genhtml_branch_coverage=1 00:26:43.335 --rc genhtml_function_coverage=1 00:26:43.335 --rc genhtml_legend=1 00:26:43.335 --rc geninfo_all_blocks=1 00:26:43.335 --rc geninfo_unexecuted_blocks=1 00:26:43.335 00:26:43.335 ' 00:26:43.335 20:25:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:26:43.335 20:25:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60145 00:26:43.335 20:25:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:26:43.335 20:25:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:26:43.335 20:25:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60145 00:26:43.335 20:25:38 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 60145 ']' 00:26:43.335 20:25:38 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:43.335 20:25:38 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:43.335 20:25:38 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:43.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:43.335 20:25:38 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:43.335 20:25:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:43.593 [2024-10-01 20:25:38.622033] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:26:43.593 [2024-10-01 20:25:38.622166] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60145 ] 00:26:43.593 [2024-10-01 20:25:38.763457] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:43.593 [2024-10-01 20:25:38.826558] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.593 [2024-10-01 20:25:38.826697] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.593 [2024-10-01 20:25:38.826835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:43.593 [2024-10-01 20:25:38.826842] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.851 20:25:38 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:43.851 20:25:38 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:26:43.851 20:25:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:26:43.851 20:25:38 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.851 20:25:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:43.852 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:26:43.852 POWER: Cannot set governor of lcore 0 to userspace 00:26:43.852 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:26:43.852 POWER: Cannot set governor of lcore 0 to performance 00:26:43.852 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:26:43.852 POWER: Cannot set governor of lcore 0 to userspace 00:26:43.852 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:26:43.852 POWER: Cannot set governor of lcore 0 to userspace 00:26:43.852 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:26:43.852 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:26:43.852 POWER: Unable to set Power Management Environment for lcore 0 00:26:43.852 [2024-10-01 20:25:38.920715] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:26:43.852 [2024-10-01 20:25:38.920736] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:26:43.852 [2024-10-01 20:25:38.920751] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:26:43.852 [2024-10-01 20:25:38.920770] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:26:43.852 [2024-10-01 20:25:38.920785] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:26:43.852 [2024-10-01 20:25:38.920798] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:26:43.852 20:25:38 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.852 20:25:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:26:43.852 20:25:38 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.852 20:25:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:43.852 [2024-10-01 20:25:39.033412] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:26:43.852 20:25:39 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.852 20:25:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:26:43.852 20:25:39 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:43.852 20:25:39 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:43.852 20:25:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:43.852 ************************************ 00:26:43.852 START TEST scheduler_create_thread 00:26:43.852 ************************************ 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:43.852 2 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:43.852 3 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:43.852 4 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:43.852 5 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:43.852 6 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:43.852 7 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.852 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:44.126 8 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:44.126 9 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:44.126 10 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.126 20:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:45.059 20:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.059 00:26:45.059 real 0m1.171s 00:26:45.059 user 0m0.013s 00:26:45.059 sys 0m0.005s 00:26:45.059 20:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:45.059 20:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:45.059 ************************************ 00:26:45.059 END TEST scheduler_create_thread 00:26:45.059 ************************************ 00:26:45.059 20:25:40 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:45.059 20:25:40 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60145 00:26:45.059 20:25:40 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 60145 ']' 00:26:45.059 20:25:40 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 60145 00:26:45.059 20:25:40 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:26:45.059 20:25:40 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:45.059 20:25:40 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60145 00:26:45.059 20:25:40 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:45.059 20:25:40 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:45.059 killing process with pid 60145 00:26:45.059 20:25:40 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60145' 00:26:45.059 20:25:40 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 60145 00:26:45.059 20:25:40 event.event_scheduler -- 
common/autotest_common.sh@974 -- # wait 60145 00:26:45.624 [2024-10-01 20:25:40.695568] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:26:45.883 00:26:45.883 real 0m2.522s 00:26:45.883 user 0m2.932s 00:26:45.883 sys 0m0.362s 00:26:45.883 20:25:40 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:45.883 20:25:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:45.883 ************************************ 00:26:45.883 END TEST event_scheduler 00:26:45.883 ************************************ 00:26:45.883 20:25:40 event -- event/event.sh@51 -- # modprobe -n nbd 00:26:45.883 20:25:40 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:26:45.883 20:25:40 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:45.883 20:25:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:45.883 20:25:40 event -- common/autotest_common.sh@10 -- # set +x 00:26:45.883 ************************************ 00:26:45.883 START TEST app_repeat 00:26:45.883 ************************************ 00:26:45.883 20:25:40 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:26:45.883 20:25:40 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:45.883 20:25:40 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:45.883 20:25:40 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:26:45.883 20:25:40 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:26:45.883 20:25:40 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:26:45.883 20:25:40 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:26:45.883 20:25:40 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:26:45.883 20:25:40 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60227 00:26:45.883 20:25:40 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:26:45.883 20:25:40 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:26:45.883 Process app_repeat pid: 60227 00:26:45.883 20:25:40 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60227' 00:26:45.883 spdk_app_start Round 0 00:26:45.883 20:25:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:26:45.883 20:25:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:26:45.883 20:25:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60227 /var/tmp/spdk-nbd.sock 00:26:45.883 20:25:40 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60227 ']' 00:26:45.883 20:25:40 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:45.883 20:25:40 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:45.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:45.883 20:25:40 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:26:45.883 20:25:40 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:45.883 20:25:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:26:45.883 [2024-10-01 20:25:40.994481] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
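Each app_repeat round below builds the same fixture: two 64 MiB malloc bdevs with 4 KiB blocks, each exported to the kernel as an NBD device. A hedged sketch of the equivalent manual calls against the app's socket, with the rpc.py path assumed relative to an SPDK tree:

  scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # prints the new bdev name, e.g. Malloc0
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # second bdev, e.g. Malloc1
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0  # expose each bdev as a kernel block device
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1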
00:26:45.883 [2024-10-01 20:25:40.994718] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60227 ] 00:26:45.883 [2024-10-01 20:25:41.136620] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:46.142 [2024-10-01 20:25:41.197804] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.142 [2024-10-01 20:25:41.197809] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.142 20:25:41 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:46.142 20:25:41 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:26:46.142 20:25:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:26:46.400 Malloc0 00:26:46.659 20:25:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:26:46.918 Malloc1 00:26:46.918 20:25:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:26:46.918 20:25:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:46.918 20:25:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:26:46.918 20:25:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:46.918 20:25:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:46.918 20:25:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:46.918 20:25:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:26:46.918 20:25:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:46.918 20:25:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:26:46.918 20:25:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:46.918 20:25:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:46.918 20:25:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:46.918 20:25:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:26:46.918 20:25:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:46.918 20:25:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:46.918 20:25:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:26:47.178 /dev/nbd0 00:26:47.178 20:25:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:47.178 20:25:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:47.178 20:25:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:47.178 20:25:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:26:47.178 20:25:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:47.178 20:25:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:47.178 20:25:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:47.178 20:25:42 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:26:47.178 20:25:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:47.178 20:25:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:47.178 20:25:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:26:47.178 1+0 records in 00:26:47.178 1+0 records out 00:26:47.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248426 s, 16.5 MB/s 00:26:47.178 20:25:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:47.178 20:25:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:26:47.178 20:25:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:47.178 20:25:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:47.178 20:25:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:26:47.178 20:25:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:47.178 20:25:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:47.178 20:25:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:26:47.504 /dev/nbd1 00:26:47.504 20:25:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:47.504 20:25:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:47.504 20:25:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:26:47.504 20:25:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:26:47.504 20:25:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:47.504 20:25:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:47.504 20:25:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:26:47.504 20:25:42 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:26:47.504 20:25:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:47.504 20:25:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:47.504 20:25:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:26:47.504 1+0 records in 00:26:47.504 1+0 records out 00:26:47.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453098 s, 9.0 MB/s 00:26:47.504 20:25:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:47.504 20:25:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:26:47.504 20:25:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:47.504 20:25:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:47.504 20:25:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:26:47.504 20:25:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:47.504 20:25:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:47.504 20:25:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:47.504 20:25:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:47.504 
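The waitfornbd checks above gate the test on each NBD device actually answering I/O: poll /proc/partitions until the kernel publishes the node, then read one block with direct I/O. A hedged standalone version, using an until-loop in place of the script's bounded retry counter and a scratch file in /tmp instead of the test-tree path:

  until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done     # wait for /dev/nbd0 to appear
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # O_DIRECT read of one 4 KiB block
  [ "$(stat -c %s /tmp/nbdtest)" != 0 ]                          # non-empty copy => the bdev served the read
  rm -f /tmp/nbdtest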
20:25:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:48.072 { 00:26:48.072 "bdev_name": "Malloc0", 00:26:48.072 "nbd_device": "/dev/nbd0" 00:26:48.072 }, 00:26:48.072 { 00:26:48.072 "bdev_name": "Malloc1", 00:26:48.072 "nbd_device": "/dev/nbd1" 00:26:48.072 } 00:26:48.072 ]' 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:48.072 { 00:26:48.072 "bdev_name": "Malloc0", 00:26:48.072 "nbd_device": "/dev/nbd0" 00:26:48.072 }, 00:26:48.072 { 00:26:48.072 "bdev_name": "Malloc1", 00:26:48.072 "nbd_device": "/dev/nbd1" 00:26:48.072 } 00:26:48.072 ]' 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:26:48.072 /dev/nbd1' 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:26:48.072 /dev/nbd1' 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:26:48.072 256+0 records in 00:26:48.072 256+0 records out 00:26:48.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00869827 s, 121 MB/s 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:48.072 256+0 records in 00:26:48.072 256+0 records out 00:26:48.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263785 s, 39.8 MB/s 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:26:48.072 256+0 records in 00:26:48.072 256+0 records out 00:26:48.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270001 s, 38.8 MB/s 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:48.072 20:25:43 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:48.072 20:25:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:48.331 20:25:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:48.331 20:25:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:48.331 20:25:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:48.331 20:25:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:48.331 20:25:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:48.331 20:25:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:48.331 20:25:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:26:48.331 20:25:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:26:48.331 20:25:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:48.331 20:25:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:48.899 20:25:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:48.899 20:25:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:48.899 20:25:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:48.899 20:25:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:48.899 20:25:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:48.899 20:25:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:48.899 20:25:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:26:48.899 20:25:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:26:48.899 20:25:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:48.899 20:25:43 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:48.899 20:25:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:49.157 20:25:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:49.157 20:25:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:49.157 20:25:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:49.157 20:25:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:49.157 20:25:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:49.157 20:25:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:26:49.157 20:25:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:26:49.157 20:25:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:26:49.157 20:25:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:26:49.157 20:25:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:26:49.157 20:25:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:49.157 20:25:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:26:49.157 20:25:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:26:49.415 20:25:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:26:49.674 [2024-10-01 20:25:44.776325] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:49.674 [2024-10-01 20:25:44.836691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.674 [2024-10-01 20:25:44.836701] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.674 [2024-10-01 20:25:44.868244] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:26:49.674 [2024-10-01 20:25:44.868320] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:26:52.996 spdk_app_start Round 1 00:26:52.996 20:25:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:26:52.996 20:25:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:26:52.996 20:25:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60227 /var/tmp/spdk-nbd.sock 00:26:52.996 20:25:47 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60227 ']' 00:26:52.996 20:25:47 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:52.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:52.996 20:25:47 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:52.996 20:25:47 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
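The waitfornbd helper traced through autotest_common.sh above works in two phases: it first polls /proc/partitions until the kernel publishes the new device node, then proves the device is actually readable by pulling a single 4 KiB block with direct I/O and checking that a non-empty file lands on disk. A minimal sketch of that pattern, assuming a retry delay and a /tmp scratch path that are illustrative rather than taken from the source (the suite itself writes to test/event/nbdtest, as the trace shows):

    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/tmp/nbdtest              # illustrative scratch path; the trace uses test/event/nbdtest
        # Phase 1: poll until the kernel lists the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                       # assumed backoff; the traced run succeeds on the first pass
        done
        # Phase 2: prove the device answers a single 4 KiB direct-I/O read.
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s "$tmp")
                rm -f "$tmp"
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }

Direct I/O (iflag=direct) matters here: it forces the read through the NBD device rather than the page cache, so a successful dd really means the SPDK-side server is answering requests.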
00:26:52.996 20:25:47 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:52.996 20:25:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:26:52.996 20:25:47 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:52.996 20:25:47 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:26:52.996 20:25:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:26:52.996 Malloc0 00:26:52.996 20:25:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:26:53.253 Malloc1 00:26:53.253 20:25:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:26:53.253 20:25:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:53.253 20:25:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:26:53.253 20:25:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:53.253 20:25:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:53.253 20:25:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:53.253 20:25:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:26:53.253 20:25:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:53.254 20:25:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:26:53.254 20:25:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:53.254 20:25:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:53.254 20:25:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:53.254 20:25:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:26:53.254 20:25:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:53.254 20:25:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:53.254 20:25:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:26:53.819 /dev/nbd0 00:26:53.819 20:25:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:53.819 20:25:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:53.819 20:25:48 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:53.819 20:25:48 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:26:53.819 20:25:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:53.819 20:25:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:53.819 20:25:48 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:53.819 20:25:48 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:26:53.819 20:25:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:53.819 20:25:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:53.819 20:25:48 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:26:53.819 1+0 records in 00:26:53.819 1+0 records out 
00:26:53.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264318 s, 15.5 MB/s 00:26:53.819 20:25:48 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:53.819 20:25:48 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:26:53.819 20:25:48 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:53.819 20:25:48 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:53.819 20:25:48 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:26:53.820 20:25:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:53.820 20:25:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:53.820 20:25:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:26:54.079 /dev/nbd1 00:26:54.079 20:25:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:54.079 20:25:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:54.079 20:25:49 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:26:54.079 20:25:49 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:26:54.079 20:25:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:54.079 20:25:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:54.079 20:25:49 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:26:54.079 20:25:49 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:26:54.079 20:25:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:54.079 20:25:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:54.079 20:25:49 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:26:54.079 1+0 records in 00:26:54.079 1+0 records out 00:26:54.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347303 s, 11.8 MB/s 00:26:54.079 20:25:49 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:54.079 20:25:49 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:26:54.079 20:25:49 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:54.079 20:25:49 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:54.079 20:25:49 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:26:54.079 20:25:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:54.079 20:25:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:54.079 20:25:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:54.079 20:25:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:54.079 20:25:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:54.337 20:25:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:54.337 { 00:26:54.337 "bdev_name": "Malloc0", 00:26:54.337 "nbd_device": "/dev/nbd0" 00:26:54.337 }, 00:26:54.337 { 00:26:54.337 "bdev_name": "Malloc1", 00:26:54.337 "nbd_device": "/dev/nbd1" 00:26:54.337 } 
00:26:54.337 ]' 00:26:54.337 20:25:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:54.337 20:25:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:54.337 { 00:26:54.337 "bdev_name": "Malloc0", 00:26:54.337 "nbd_device": "/dev/nbd0" 00:26:54.337 }, 00:26:54.337 { 00:26:54.337 "bdev_name": "Malloc1", 00:26:54.337 "nbd_device": "/dev/nbd1" 00:26:54.337 } 00:26:54.337 ]' 00:26:54.337 20:25:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:26:54.337 /dev/nbd1' 00:26:54.337 20:25:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:26:54.337 /dev/nbd1' 00:26:54.337 20:25:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:54.337 20:25:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:26:54.337 20:25:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:26:54.337 20:25:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:26:54.337 20:25:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:26:54.337 20:25:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:26:54.337 20:25:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:54.337 20:25:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:54.337 20:25:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:54.337 20:25:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:54.337 20:25:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:54.337 20:25:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:26:54.337 256+0 records in 00:26:54.337 256+0 records out 00:26:54.337 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0072019 s, 146 MB/s 00:26:54.337 20:25:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:54.338 20:25:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:54.596 256+0 records in 00:26:54.596 256+0 records out 00:26:54.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229003 s, 45.8 MB/s 00:26:54.596 20:25:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:54.596 20:25:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:26:54.596 256+0 records in 00:26:54.596 256+0 records out 00:26:54.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290387 s, 36.1 MB/s 00:26:54.596 20:25:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:26:54.596 20:25:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:54.596 20:25:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:54.596 20:25:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:54.596 20:25:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:54.596 20:25:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:54.596 20:25:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:54.596 20:25:49 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:26:54.596 20:25:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:26:54.596 20:25:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:54.596 20:25:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:26:54.596 20:25:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:54.596 20:25:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:54.596 20:25:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:54.596 20:25:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:54.596 20:25:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:54.596 20:25:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:26:54.596 20:25:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:54.596 20:25:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:54.854 20:25:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:54.854 20:25:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:54.854 20:25:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:54.854 20:25:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:54.854 20:25:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:54.854 20:25:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:54.854 20:25:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:26:54.854 20:25:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:26:54.854 20:25:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:54.854 20:25:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:55.113 20:25:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:55.113 20:25:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:55.113 20:25:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:55.113 20:25:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:55.113 20:25:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:55.113 20:25:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:55.113 20:25:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:26:55.113 20:25:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:26:55.113 20:25:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:55.113 20:25:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:55.113 20:25:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:55.679 20:25:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:55.679 20:25:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:55.679 20:25:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:26:55.679 20:25:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:55.679 20:25:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:26:55.679 20:25:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:55.679 20:25:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:26:55.679 20:25:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:26:55.679 20:25:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:26:55.679 20:25:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:26:55.679 20:25:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:55.679 20:25:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:26:55.679 20:25:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:26:55.938 20:25:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:26:56.196 [2024-10-01 20:25:51.252037] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:56.196 [2024-10-01 20:25:51.310838] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.196 [2024-10-01 20:25:51.310850] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.196 [2024-10-01 20:25:51.341992] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:26:56.196 [2024-10-01 20:25:51.342053] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:26:59.498 20:25:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:26:59.498 spdk_app_start Round 2 00:26:59.498 20:25:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:26:59.498 20:25:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60227 /var/tmp/spdk-nbd.sock 00:26:59.498 20:25:54 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60227 ']' 00:26:59.498 20:25:54 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:59.498 20:25:54 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:59.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:59.498 20:25:54 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
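Each round then runs the same data-integrity pass over both exported devices. nbd_dd_data_verify in write mode fills a scratch file with 256 random 4 KiB blocks (1 MiB total) and dd's it onto every NBD device with oflag=direct; in verify mode it byte-compares the first 1 MiB of each device back against that file with cmp, then removes the file. A condensed sketch of the logic traced above (names follow nbd_common.sh; the /tmp path is an illustrative stand-in for test/event/nbdrandtest, and error handling is simplified):

    nbd_dd_data_verify() {
        local -a nbd_list=($1)              # e.g. '/dev/nbd0 /dev/nbd1', word-split into an array
        local operation=$2
        local tmp_file=/tmp/nbdrandtest     # illustrative path
        if [ "$operation" = write ]; then
            # 1 MiB of random data, pushed through direct I/O to every device
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            # byte-for-byte comparison of the first 1 MiB on each device
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }

Calling it twice, first with write and then with verify as in the trace, round-trips the random data through the Malloc bdevs and fails the test on the first differing byte.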
00:26:59.498 20:25:54 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:59.498 20:25:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:26:59.498 20:25:54 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:59.498 20:25:54 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:26:59.498 20:25:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:26:59.498 Malloc0 00:26:59.498 20:25:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:26:59.756 Malloc1 00:26:59.756 20:25:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:26:59.756 20:25:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:59.756 20:25:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:26:59.756 20:25:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:59.756 20:25:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:59.756 20:25:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:59.756 20:25:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:26:59.756 20:25:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:59.756 20:25:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:26:59.756 20:25:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:59.756 20:25:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:59.756 20:25:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:59.756 20:25:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:26:59.756 20:25:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:59.757 20:25:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:59.757 20:25:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:27:00.014 /dev/nbd0 00:27:00.014 20:25:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:00.014 20:25:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:00.014 20:25:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:00.014 20:25:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:27:00.014 20:25:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:00.014 20:25:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:00.014 20:25:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:00.014 20:25:55 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:27:00.014 20:25:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:00.014 20:25:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:00.014 20:25:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:27:00.014 1+0 records in 00:27:00.014 1+0 records out 
00:27:00.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282085 s, 14.5 MB/s 00:27:00.014 20:25:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:00.014 20:25:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:27:00.014 20:25:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:00.272 20:25:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:00.272 20:25:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:27:00.272 20:25:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:00.272 20:25:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:00.272 20:25:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:27:00.531 /dev/nbd1 00:27:00.531 20:25:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:00.531 20:25:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:00.531 20:25:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:27:00.531 20:25:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:27:00.531 20:25:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:00.531 20:25:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:00.531 20:25:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:27:00.531 20:25:55 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:27:00.531 20:25:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:00.531 20:25:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:00.531 20:25:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:27:00.531 1+0 records in 00:27:00.531 1+0 records out 00:27:00.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312783 s, 13.1 MB/s 00:27:00.531 20:25:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:00.531 20:25:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:27:00.531 20:25:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:00.531 20:25:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:00.531 20:25:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:27:00.531 20:25:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:00.531 20:25:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:00.531 20:25:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:00.531 20:25:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:00.531 20:25:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:00.789 20:25:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:00.789 { 00:27:00.789 "bdev_name": "Malloc0", 00:27:00.789 "nbd_device": "/dev/nbd0" 00:27:00.789 }, 00:27:00.789 { 00:27:00.789 "bdev_name": "Malloc1", 00:27:00.789 "nbd_device": "/dev/nbd1" 00:27:00.789 } 
00:27:00.789 ]' 00:27:00.789 20:25:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:00.789 { 00:27:00.789 "bdev_name": "Malloc0", 00:27:00.789 "nbd_device": "/dev/nbd0" 00:27:00.789 }, 00:27:00.789 { 00:27:00.789 "bdev_name": "Malloc1", 00:27:00.789 "nbd_device": "/dev/nbd1" 00:27:00.789 } 00:27:00.789 ]' 00:27:00.789 20:25:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:00.790 20:25:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:27:00.790 /dev/nbd1' 00:27:00.790 20:25:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:27:00.790 /dev/nbd1' 00:27:00.790 20:25:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:00.790 20:25:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:27:00.790 20:25:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:27:00.790 20:25:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:27:00.790 20:25:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:27:00.790 20:25:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:27:00.790 20:25:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:00.790 20:25:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:00.790 20:25:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:00.790 20:25:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:00.790 20:25:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:00.790 20:25:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:27:00.790 256+0 records in 00:27:00.790 256+0 records out 00:27:00.790 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00709203 s, 148 MB/s 00:27:00.790 20:25:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:00.790 20:25:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:01.048 256+0 records in 00:27:01.048 256+0 records out 00:27:01.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277957 s, 37.7 MB/s 00:27:01.048 20:25:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:01.048 20:25:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:27:01.048 256+0 records in 00:27:01.048 256+0 records out 00:27:01.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279543 s, 37.5 MB/s 00:27:01.048 20:25:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:27:01.048 20:25:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:01.048 20:25:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:01.048 20:25:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:01.048 20:25:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:01.048 20:25:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:01.048 20:25:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:01.048 20:25:56 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:01.048 20:25:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:27:01.048 20:25:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:01.048 20:25:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:27:01.048 20:25:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:01.048 20:25:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:27:01.048 20:25:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:01.048 20:25:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:01.048 20:25:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:01.048 20:25:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:27:01.048 20:25:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:01.048 20:25:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:01.306 20:25:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:01.306 20:25:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:01.306 20:25:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:01.306 20:25:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:01.306 20:25:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:01.306 20:25:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:01.306 20:25:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:27:01.306 20:25:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:27:01.306 20:25:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:01.306 20:25:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:01.564 20:25:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:01.564 20:25:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:01.564 20:25:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:01.564 20:25:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:01.564 20:25:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:01.564 20:25:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:01.564 20:25:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:27:01.564 20:25:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:27:01.564 20:25:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:01.564 20:25:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:01.564 20:25:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:01.824 20:25:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:01.824 20:25:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:01.824 20:25:57 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:27:02.082 20:25:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:02.082 20:25:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:27:02.082 20:25:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:02.082 20:25:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:27:02.082 20:25:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:27:02.082 20:25:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:27:02.082 20:25:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:27:02.082 20:25:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:02.082 20:25:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:27:02.082 20:25:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:27:02.340 20:25:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:27:02.598 [2024-10-01 20:25:57.691482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:02.598 [2024-10-01 20:25:57.751029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.598 [2024-10-01 20:25:57.751042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.598 [2024-10-01 20:25:57.782004] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:27:02.598 [2024-10-01 20:25:57.782065] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:27:05.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:05.882 20:26:00 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60227 /var/tmp/spdk-nbd.sock 00:27:05.882 20:26:00 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60227 ']' 00:27:05.882 20:26:00 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:05.882 20:26:00 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:05.882 20:26:00 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
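Teardown mirrors setup: nbd_stop_disk is issued per device, waitfornbd_exit polls /proc/partitions until each entry disappears, and nbd_get_count then re-queries the target expecting zero exported disks before spdk_kill_instance SIGTERM is sent. The count comes from piping the RPC's JSON through jq and grep, roughly as follows (a sketch that assumes rpc.py resolves on PATH, where the suite uses the absolute scripts/rpc.py path):

    nbd_get_count() {
        local rpc_server=$1 disks names count
        disks=$(rpc.py -s "$rpc_server" nbd_get_disks)        # JSON array of {bdev_name, nbd_device}
        names=$(echo "$disks" | jq -r '.[] | .nbd_device')    # one device path per line; empty once stopped
        count=$(echo "$names" | grep -c /dev/nbd || true)     # grep -c prints 0 but exits non-zero on no match
        echo "$count"
    }

The "|| true" is why the trace shows a bare true right after grep -c in the empty case: with no devices left, grep still prints 0 but returns a failing status, which would otherwise trip the suite's error handling.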
00:27:05.882 20:26:00 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:05.882 20:26:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:27:05.882 20:26:00 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:05.882 20:26:00 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:27:05.882 20:26:00 event.app_repeat -- event/event.sh@39 -- # killprocess 60227 00:27:05.882 20:26:00 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 60227 ']' 00:27:05.882 20:26:00 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 60227 00:27:05.882 20:26:00 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:27:05.882 20:26:00 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:05.882 20:26:00 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60227 00:27:05.882 killing process with pid 60227 00:27:05.882 20:26:00 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:05.882 20:26:00 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:05.882 20:26:00 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60227' 00:27:05.882 20:26:00 event.app_repeat -- common/autotest_common.sh@969 -- # kill 60227 00:27:05.882 20:26:00 event.app_repeat -- common/autotest_common.sh@974 -- # wait 60227 00:27:05.882 spdk_app_start is called in Round 0. 00:27:05.882 Shutdown signal received, stop current app iteration 00:27:05.882 Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 reinitialization... 00:27:05.882 spdk_app_start is called in Round 1. 00:27:05.882 Shutdown signal received, stop current app iteration 00:27:05.882 Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 reinitialization... 00:27:05.882 spdk_app_start is called in Round 2. 00:27:05.882 Shutdown signal received, stop current app iteration 00:27:05.882 Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 reinitialization... 00:27:05.882 spdk_app_start is called in Round 3. 00:27:05.882 Shutdown signal received, stop current app iteration 00:27:05.882 20:26:01 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:27:05.882 20:26:01 event.app_repeat -- event/event.sh@42 -- # return 0 00:27:05.882 00:27:05.882 real 0m20.053s 00:27:05.882 user 0m46.251s 00:27:05.882 sys 0m3.104s 00:27:05.882 ************************************ 00:27:05.882 END TEST app_repeat 00:27:05.882 ************************************ 00:27:05.882 20:26:01 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:05.882 20:26:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:27:05.882 20:26:01 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:27:05.882 20:26:01 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:27:05.882 20:26:01 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:05.882 20:26:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:05.882 20:26:01 event -- common/autotest_common.sh@10 -- # set +x 00:27:05.882 ************************************ 00:27:05.882 START TEST cpu_locks 00:27:05.882 ************************************ 00:27:05.882 20:26:01 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:27:05.882 * Looking for test storage... 
00:27:06.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:27:06.141 20:26:01 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:06.141 20:26:01 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:06.141 20:26:01 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:27:06.141 20:26:01 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:06.141 20:26:01 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:27:06.141 20:26:01 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:06.141 20:26:01 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:06.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.141 --rc genhtml_branch_coverage=1 00:27:06.141 --rc genhtml_function_coverage=1 00:27:06.141 --rc genhtml_legend=1 00:27:06.141 --rc geninfo_all_blocks=1 00:27:06.141 --rc geninfo_unexecuted_blocks=1 00:27:06.141 00:27:06.141 ' 00:27:06.141 20:26:01 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:06.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.142 --rc genhtml_branch_coverage=1 00:27:06.142 --rc genhtml_function_coverage=1 
00:27:06.142 --rc genhtml_legend=1 00:27:06.142 --rc geninfo_all_blocks=1 00:27:06.142 --rc geninfo_unexecuted_blocks=1 00:27:06.142 00:27:06.142 ' 00:27:06.142 20:26:01 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:06.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.142 --rc genhtml_branch_coverage=1 00:27:06.142 --rc genhtml_function_coverage=1 00:27:06.142 --rc genhtml_legend=1 00:27:06.142 --rc geninfo_all_blocks=1 00:27:06.142 --rc geninfo_unexecuted_blocks=1 00:27:06.142 00:27:06.142 ' 00:27:06.142 20:26:01 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:06.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.142 --rc genhtml_branch_coverage=1 00:27:06.142 --rc genhtml_function_coverage=1 00:27:06.142 --rc genhtml_legend=1 00:27:06.142 --rc geninfo_all_blocks=1 00:27:06.142 --rc geninfo_unexecuted_blocks=1 00:27:06.142 00:27:06.142 ' 00:27:06.142 20:26:01 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:27:06.142 20:26:01 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:27:06.142 20:26:01 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:27:06.142 20:26:01 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:27:06.142 20:26:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:06.142 20:26:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:06.142 20:26:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:06.142 ************************************ 00:27:06.142 START TEST default_locks 00:27:06.142 ************************************ 00:27:06.142 20:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:27:06.142 20:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60869 00:27:06.142 20:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60869 00:27:06.142 20:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60869 ']' 00:27:06.142 20:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:06.142 20:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.142 20:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:06.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.142 20:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.142 20:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:06.142 20:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:27:06.142 [2024-10-01 20:26:01.303762] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:27:06.142 [2024-10-01 20:26:01.303863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60869 ] 00:27:06.400 [2024-10-01 20:26:01.439202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.400 [2024-10-01 20:26:01.508604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.658 20:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:06.658 20:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:27:06.658 20:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60869 00:27:06.658 20:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60869 00:27:06.658 20:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:07.224 20:26:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60869 00:27:07.224 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 60869 ']' 00:27:07.224 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 60869 00:27:07.224 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:27:07.224 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:07.224 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60869 00:27:07.224 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:07.224 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:07.224 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60869' 00:27:07.224 killing process with pid 60869 00:27:07.224 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 60869 00:27:07.224 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 60869 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60869 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60869 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60869 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60869 ']' 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:07.791 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:27:07.791 ERROR: process (pid: 60869) is no longer running 00:27:07.791 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60869) - No such process 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:27:07.791 00:27:07.791 real 0m1.546s 00:27:07.791 user 0m1.622s 00:27:07.791 sys 0m0.645s 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:07.791 ************************************ 00:27:07.791 END TEST default_locks 00:27:07.791 20:26:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:27:07.791 ************************************ 00:27:07.791 20:26:02 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:27:07.791 20:26:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:07.791 20:26:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:07.791 20:26:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:07.791 ************************************ 00:27:07.791 START TEST default_locks_via_rpc 00:27:07.791 ************************************ 00:27:07.791 20:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:27:07.791 20:26:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60920 00:27:07.791 20:26:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60920 00:27:07.791 20:26:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:07.791 20:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60920 ']' 00:27:07.791 20:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.791 20:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:07.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
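The default_locks run above pins down the basic invariant behind this whole suite: an spdk_tgt started with -m 0x1 holds a file lock for core 0 (visible via lslocks as spdk_cpu_lock) while it runs, and leaves no /var/tmp/spdk_cpu_lock_* files behind once it is killed. A minimal standalone re-creation of that check could look like the sketch below; the binary path matches this workspace, but the helper name and the plain sleep standing in for the harness's waitforlisten are our own simplifications, not the in-tree code.

    #!/usr/bin/env bash
    # Sketch: a running spdk_tgt should hold a core lock; after it exits,
    # no /var/tmp/spdk_cpu_lock_* files should remain (nullglob makes the
    # glob expand to nothing when there are no matches).
    shopt -s nullglob
    core_lock_held() { lslocks -p "$1" | grep -q spdk_cpu_lock; }

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    pid=$!
    sleep 1   # crude stand-in for the harness's waitforlisten
    core_lock_held "$pid" || echo "expected core lock missing for pid $pid"
    kill "$pid"; wait "$pid" 2>/dev/null
    locks=(/var/tmp/spdk_cpu_lock_*)
    (( ${#locks[@]} == 0 )) || echo "stale lock files: ${locks[*]}"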
00:27:07.791 20:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.791 20:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:07.791 20:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:07.791 [2024-10-01 20:26:02.913991] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:27:07.791 [2024-10-01 20:26:02.914093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60920 ] 00:27:08.049 [2024-10-01 20:26:03.052039] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.049 [2024-10-01 20:26:03.111441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.983 20:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:08.983 20:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:27:08.983 20:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:27:08.983 20:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.983 20:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.983 20:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.983 20:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:27:08.983 20:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:27:08.983 20:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:27:08.983 20:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:27:08.983 20:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:27:08.983 20:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.983 20:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.983 20:26:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.983 20:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60920 00:27:08.983 20:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60920 00:27:08.983 20:26:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:09.550 20:26:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60920 00:27:09.550 20:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 60920 ']' 00:27:09.550 20:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 60920 00:27:09.550 20:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:27:09.550 20:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:09.550 20:26:04 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60920 00:27:09.550 20:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:09.550 20:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:09.550 20:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60920' 00:27:09.550 killing process with pid 60920 00:27:09.550 20:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 60920 00:27:09.550 20:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 60920 00:27:09.809 00:27:09.809 real 0m2.152s 00:27:09.809 user 0m2.476s 00:27:09.809 sys 0m0.639s 00:27:09.809 20:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:09.809 20:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:09.809 ************************************ 00:27:09.809 END TEST default_locks_via_rpc 00:27:09.809 ************************************ 00:27:09.809 20:26:05 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:27:09.809 20:26:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:09.809 20:26:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:09.809 20:26:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:09.809 ************************************ 00:27:09.809 START TEST non_locking_app_on_locked_coremask 00:27:09.809 ************************************ 00:27:09.809 20:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:27:09.809 20:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60993 00:27:09.809 20:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60993 /var/tmp/spdk.sock 00:27:09.809 20:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:09.809 20:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60993 ']' 00:27:09.809 20:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.809 20:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:09.809 20:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.809 20:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:09.809 20:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:10.068 [2024-10-01 20:26:05.105303] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
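Each "Waiting for process to start up and listen on UNIX domain socket ..." message in this log comes from the harness's waitforlisten helper, which blocks until the freshly launched target answers on its RPC socket (the max_retries=100 in the xtrace is its polling budget). The real helper lives in test/common/autotest_common.sh and also confirms readiness over the RPC channel; the sketch below only captures the polling skeleton and is not the exact implementation.

    # Rough shape of the waitforlisten idiom: poll until the pid is alive
    # AND its UNIX-domain RPC socket exists, giving up after 100 tries.
    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died early
            [[ -S $sock ]] && return 0               # socket is up
            sleep 0.1
        done
        return 1
    }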
00:27:10.068 [2024-10-01 20:26:05.105401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60993 ] 00:27:10.068 [2024-10-01 20:26:05.241504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.068 [2024-10-01 20:26:05.301477] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.326 20:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:10.326 20:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:27:10.326 20:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:27:10.326 20:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61003 00:27:10.326 20:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61003 /var/tmp/spdk2.sock 00:27:10.326 20:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61003 ']' 00:27:10.326 20:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:10.326 20:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:10.326 20:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:10.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:27:10.326 20:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:10.326 20:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:10.585 [2024-10-01 20:26:05.608890] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:27:10.585 [2024-10-01 20:26:05.609011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61003 ] 00:27:10.585 [2024-10-01 20:26:05.762581] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
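non_locking_app_on_locked_coremask is the coexistence case: the first target (pid 60993) claims core 0 normally, and a second target (pid 61003) is then started on the very same core but with --disable-cpumask-locks and its own RPC socket (-r /var/tmp/spdk2.sock). The "CPU core locks deactivated." notice above is the second instance confirming it skipped lock acquisition, which is why both can run at once. Boiled down to the two launch commands (binary path as in this workspace):

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_TGT" -m 0x1 &                          # claims /var/tmp/spdk_cpu_lock_000
    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &                  # same core, no claim attempted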
00:27:10.585 [2024-10-01 20:26:05.762656] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.843 [2024-10-01 20:26:05.903547] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.411 20:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:11.411 20:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:27:11.411 20:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60993 00:27:11.411 20:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60993 00:27:11.411 20:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:12.788 20:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60993 00:27:12.788 20:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60993 ']' 00:27:12.788 20:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60993 00:27:12.788 20:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:27:12.788 20:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:12.788 20:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60993 00:27:12.788 20:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:12.788 20:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:12.788 killing process with pid 60993 00:27:12.788 20:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60993' 00:27:12.788 20:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60993 00:27:12.788 20:26:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60993 00:27:13.355 20:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61003 00:27:13.355 20:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61003 ']' 00:27:13.355 20:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 61003 00:27:13.355 20:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:27:13.355 20:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:13.355 20:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61003 00:27:13.355 20:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:13.355 20:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:13.355 killing process with pid 61003 00:27:13.355 20:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61003' 00:27:13.355 20:26:08 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 61003 00:27:13.355 20:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 61003 00:27:13.922 00:27:13.922 real 0m3.856s 00:27:13.922 user 0m4.419s 00:27:13.922 sys 0m1.213s 00:27:13.922 20:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:13.922 20:26:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:13.922 ************************************ 00:27:13.922 END TEST non_locking_app_on_locked_coremask 00:27:13.922 ************************************ 00:27:13.922 20:26:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:27:13.922 20:26:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:13.922 20:26:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:13.922 20:26:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:13.922 ************************************ 00:27:13.922 START TEST locking_app_on_unlocked_coremask 00:27:13.922 ************************************ 00:27:13.922 20:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:27:13.922 20:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61088 00:27:13.922 20:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:27:13.922 20:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61088 /var/tmp/spdk.sock 00:27:13.922 20:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61088 ']' 00:27:13.922 20:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.922 20:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:13.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.922 20:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.922 20:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:13.922 20:26:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:13.922 [2024-10-01 20:26:09.022428] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:27:13.922 [2024-10-01 20:26:09.022580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61088 ] 00:27:13.922 [2024-10-01 20:26:09.167072] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
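locking_app_on_unlocked_coremask, starting here, flips the roles: the first target (pid 61088) runs lock-free, a second (pid 61116, on /var/tmp/spdk2.sock) takes the core locks, and the lslocks probe is therefore pointed at the second pid. The invariant being exercised, in sketch form (the pid variables are ours, and the trace itself only checks the locking instance):

    # The core lock should belong to whichever instance did NOT pass
    # --disable-cpumask-locks, regardless of launch order.
    lslocks -p "$pid_lockless" | grep -q spdk_cpu_lock && echo "unexpected lock"
    lslocks -p "$pid_locked"   | grep -q spdk_cpu_lock || echo "missing lock"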
00:27:13.922 [2024-10-01 20:26:09.167159] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.182 [2024-10-01 20:26:09.239216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.822 20:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:14.822 20:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:27:14.822 20:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61116 00:27:14.822 20:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:27:14.822 20:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61116 /var/tmp/spdk2.sock 00:27:14.823 20:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61116 ']' 00:27:14.823 20:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:14.823 20:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:14.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:27:14.823 20:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:14.823 20:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:14.823 20:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:15.080 [2024-10-01 20:26:10.083003] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:27:15.081 [2024-10-01 20:26:10.083095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61116 ] 00:27:15.081 [2024-10-01 20:26:10.224691] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.339 [2024-10-01 20:26:10.339300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.906 20:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:15.906 20:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:27:15.906 20:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61116 00:27:15.906 20:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:15.906 20:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61116 00:27:17.280 20:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61088 00:27:17.280 20:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61088 ']' 00:27:17.280 20:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 61088 00:27:17.280 20:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:27:17.280 20:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:17.280 20:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61088 00:27:17.280 20:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:17.280 20:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:17.280 killing process with pid 61088 00:27:17.280 20:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61088' 00:27:17.280 20:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 61088 00:27:17.280 20:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 61088 00:27:18.215 20:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61116 00:27:18.215 20:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61116 ']' 00:27:18.215 20:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 61116 00:27:18.215 20:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:27:18.215 20:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:18.215 20:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61116 00:27:18.215 killing process with pid 61116 00:27:18.215 20:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:18.215 20:26:13 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:18.215 20:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61116' 00:27:18.215 20:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 61116 00:27:18.215 20:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 61116 00:27:18.215 ************************************ 00:27:18.215 END TEST locking_app_on_unlocked_coremask 00:27:18.215 ************************************ 00:27:18.215 00:27:18.215 real 0m4.522s 00:27:18.215 user 0m5.276s 00:27:18.215 sys 0m1.298s 00:27:18.215 20:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:18.215 20:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:18.473 20:26:13 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:27:18.473 20:26:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:18.473 20:26:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:18.473 20:26:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:18.473 ************************************ 00:27:18.473 START TEST locking_app_on_locked_coremask 00:27:18.473 ************************************ 00:27:18.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.473 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:27:18.473 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61200 00:27:18.473 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61200 /var/tmp/spdk.sock 00:27:18.473 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61200 ']' 00:27:18.473 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.473 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:18.473 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:18.473 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.473 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:18.473 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:18.473 [2024-10-01 20:26:13.569958] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:27:18.473 [2024-10-01 20:26:13.570064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61200 ] 00:27:18.473 [2024-10-01 20:26:13.705430] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.732 [2024-10-01 20:26:13.765484] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.036 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:19.037 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:27:19.037 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61215 00:27:19.037 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:27:19.037 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61215 /var/tmp/spdk2.sock 00:27:19.037 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:27:19.037 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61215 /var/tmp/spdk2.sock 00:27:19.037 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:27:19.037 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:19.037 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:27:19.037 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:19.037 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61215 /var/tmp/spdk2.sock 00:27:19.037 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61215 ']' 00:27:19.037 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:19.037 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:19.037 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:19.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:27:19.037 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:19.037 20:26:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:19.037 [2024-10-01 20:26:14.056270] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:27:19.037 [2024-10-01 20:26:14.056577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61215 ] 00:27:19.037 [2024-10-01 20:26:14.199483] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61200 has claimed it. 00:27:19.037 [2024-10-01 20:26:14.199561] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:27:19.604 ERROR: process (pid: 61215) is no longer running 00:27:19.604 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (61215) - No such process 00:27:19.604 20:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:19.604 20:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:27:19.604 20:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:27:19.604 20:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:19.604 20:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:19.604 20:26:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:19.604 20:26:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61200 00:27:19.604 20:26:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61200 00:27:19.604 20:26:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:20.540 20:26:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61200 00:27:20.540 20:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61200 ']' 00:27:20.540 20:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 61200 00:27:20.540 20:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:27:20.540 20:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:20.540 20:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61200 00:27:20.540 killing process with pid 61200 00:27:20.540 20:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:20.540 20:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:20.540 20:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61200' 00:27:20.540 20:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 61200 00:27:20.540 20:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 61200 00:27:20.800 ************************************ 00:27:20.800 END TEST locking_app_on_locked_coremask 00:27:20.800 ************************************ 00:27:20.800 00:27:20.800 real 0m2.389s 00:27:20.800 user 0m2.770s 00:27:20.800 sys 0m0.707s 00:27:20.800 20:26:15 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:20.800 20:26:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:20.800 20:26:15 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:27:20.800 20:26:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:20.800 20:26:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:20.800 20:26:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:20.800 ************************************ 00:27:20.800 START TEST locking_overlapped_coremask 00:27:20.800 ************************************ 00:27:20.800 20:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:27:20.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:20.800 20:26:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61272 00:27:20.800 20:26:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61272 /var/tmp/spdk.sock 00:27:20.800 20:26:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:27:20.800 20:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 61272 ']' 00:27:20.800 20:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.800 20:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:20.800 20:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.800 20:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:20.800 20:26:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:20.800 [2024-10-01 20:26:16.021312] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:27:20.800 [2024-10-01 20:26:16.021683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61272 ] 00:27:21.058 [2024-10-01 20:26:16.161695] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:21.058 [2024-10-01 20:26:16.223916] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.058 [2024-10-01 20:26:16.224044] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:21.058 [2024-10-01 20:26:16.224223] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.316 20:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:21.316 20:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:27:21.316 20:26:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61284 00:27:21.316 20:26:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:27:21.316 20:26:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61284 /var/tmp/spdk2.sock 00:27:21.317 20:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:27:21.317 20:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61284 /var/tmp/spdk2.sock 00:27:21.317 20:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:27:21.317 20:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:21.317 20:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:27:21.317 20:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:21.317 20:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61284 /var/tmp/spdk2.sock 00:27:21.317 20:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 61284 ']' 00:27:21.317 20:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:21.317 20:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:21.317 20:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:21.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:27:21.317 20:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:21.317 20:26:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:21.317 [2024-10-01 20:26:16.540603] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
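locking_overlapped_coremask runs the first target with -m 0x7 (reactors on cores 0, 1 and 2, as the three reactor_run notices above show) and then attempts a second with -m 0x1c (cores 2, 3 and 4). The two masks intersect on core 2, so the second target must fail at the lock-claim stage; that is the "Cannot create lock on core 2" error reported just below. The overlap is plain bitmask arithmetic:

    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2 only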
00:27:21.317 [2024-10-01 20:26:16.541032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61284 ] 00:27:21.575 [2024-10-01 20:26:16.692701] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61272 has claimed it. 00:27:21.575 [2024-10-01 20:26:16.692929] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:27:22.142 ERROR: process (pid: 61284) is no longer running 00:27:22.142 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (61284) - No such process 00:27:22.142 20:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:22.142 20:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:27:22.142 20:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:27:22.142 20:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:22.142 20:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:22.142 20:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:22.142 20:26:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:27:22.142 20:26:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:27:22.142 20:26:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:27:22.142 20:26:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:27:22.142 20:26:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61272 00:27:22.142 20:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 61272 ']' 00:27:22.142 20:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 61272 00:27:22.142 20:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:27:22.142 20:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:22.142 20:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61272 00:27:22.142 killing process with pid 61272 00:27:22.142 20:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:22.142 20:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:22.142 20:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61272' 00:27:22.142 20:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 61272 00:27:22.142 20:26:17 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 61272 00:27:22.400 ************************************ 00:27:22.400 END TEST locking_overlapped_coremask 00:27:22.400 ************************************ 00:27:22.400 00:27:22.400 real 0m1.669s 00:27:22.400 user 0m4.508s 00:27:22.400 sys 0m0.350s 00:27:22.401 20:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:22.401 20:26:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:22.660 20:26:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:27:22.660 20:26:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:22.660 20:26:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:22.660 20:26:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:22.660 ************************************ 00:27:22.660 START TEST locking_overlapped_coremask_via_rpc 00:27:22.660 ************************************ 00:27:22.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.660 20:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:27:22.660 20:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61334 00:27:22.660 20:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:27:22.660 20:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61334 /var/tmp/spdk.sock 00:27:22.660 20:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61334 ']' 00:27:22.660 20:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.660 20:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:22.660 20:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.660 20:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:22.660 20:26:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:22.660 [2024-10-01 20:26:17.718492] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:27:22.660 [2024-10-01 20:26:17.718583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61334 ] 00:27:22.660 [2024-10-01 20:26:17.851999] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
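After the failed second claim, check_remaining_locks (xtrace at 20:26:17 above) asserts that exactly the three lock files for cores 0 through 2 survive: it globs /var/tmp/spdk_cpu_lock_* and compares the result against the brace expansion /var/tmp/spdk_cpu_lock_{000..002}. Reconstructed from that trace (the harness's escaped pattern match amounts to the literal string comparison used here):

    check_remaining_locks_sketch() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 for -m 0x7
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }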
00:27:22.660 [2024-10-01 20:26:17.852283] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:22.660 [2024-10-01 20:26:17.912496] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.660 [2024-10-01 20:26:17.912541] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:22.660 [2024-10-01 20:26:17.912548] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.919 20:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:22.919 20:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:27:22.919 20:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:27:22.919 20:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61351 00:27:22.919 20:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61351 /var/tmp/spdk2.sock 00:27:22.919 20:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61351 ']' 00:27:22.919 20:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:22.919 20:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:22.919 20:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:22.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:27:22.919 20:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:22.919 20:26:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:23.196 [2024-10-01 20:26:18.222239] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:27:23.196 [2024-10-01 20:26:18.222812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61351 ] 00:27:23.196 [2024-10-01 20:26:18.368886] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:27:23.196 [2024-10-01 20:26:18.369098] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:23.455 [2024-10-01 20:26:18.490940] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:27:23.455 [2024-10-01 20:26:18.494168] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:27:23.455 [2024-10-01 20:26:18.494172] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:24.389 [2024-10-01 20:26:19.329251] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61334 has claimed it. 00:27:24.389 2024/10/01 20:26:19 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:27:24.389 request: 00:27:24.389 { 00:27:24.389 "method": "framework_enable_cpumask_locks", 00:27:24.389 "params": {} 00:27:24.389 } 00:27:24.389 Got JSON-RPC error response 00:27:24.389 GoRPCClient: error on JSON-RPC call 00:27:24.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61334 /var/tmp/spdk.sock 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61334 ']' 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:24.389 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:24.648 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:24.648 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:27:24.648 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61351 /var/tmp/spdk2.sock 00:27:24.648 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61351 ']' 00:27:24.648 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:24.648 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:24.648 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:24.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:27:24.648 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:24.648 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:24.907 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:24.907 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:27:24.907 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:27:24.907 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:27:24.907 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:27:24.908 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:27:24.908 00:27:24.908 real 0m2.315s 00:27:24.908 user 0m1.365s 00:27:24.908 sys 0m0.173s 00:27:24.908 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:24.908 20:26:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:24.908 ************************************ 00:27:24.908 END TEST locking_overlapped_coremask_via_rpc 00:27:24.908 ************************************ 00:27:24.908 20:26:20 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:27:24.908 20:26:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61334 ]] 00:27:24.908 20:26:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61334 00:27:24.908 20:26:20 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61334 ']' 00:27:24.908 20:26:20 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61334 00:27:24.908 20:26:20 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:27:24.908 20:26:20 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:24.908 20:26:20 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61334 00:27:24.908 killing process with pid 61334 00:27:24.908 20:26:20 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:24.908 20:26:20 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:24.908 20:26:20 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61334' 00:27:24.908 20:26:20 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61334 00:27:24.908 20:26:20 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61334 00:27:25.167 20:26:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61351 ]] 00:27:25.167 20:26:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61351 00:27:25.167 20:26:20 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61351 ']' 00:27:25.167 20:26:20 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61351 00:27:25.167 20:26:20 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:27:25.167 20:26:20 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:25.167 
20:26:20 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61351 00:27:25.425 killing process with pid 61351 00:27:25.425 20:26:20 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:27:25.425 20:26:20 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:27:25.425 20:26:20 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61351' 00:27:25.425 20:26:20 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61351 00:27:25.425 20:26:20 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61351 00:27:25.684 20:26:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:27:25.684 20:26:20 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:27:25.684 20:26:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61334 ]] 00:27:25.684 20:26:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61334 00:27:25.684 20:26:20 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61334 ']' 00:27:25.684 Process with pid 61334 is not found 00:27:25.684 20:26:20 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61334 00:27:25.684 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61334) - No such process 00:27:25.684 20:26:20 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61334 is not found' 00:27:25.684 20:26:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61351 ]] 00:27:25.684 20:26:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61351 00:27:25.684 20:26:20 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61351 ']' 00:27:25.684 Process with pid 61351 is not found 00:27:25.684 20:26:20 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61351 00:27:25.684 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61351) - No such process 00:27:25.684 20:26:20 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61351 is not found' 00:27:25.684 20:26:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:27:25.684 00:27:25.684 real 0m19.690s 00:27:25.684 user 0m34.342s 00:27:25.684 sys 0m5.779s 00:27:25.684 ************************************ 00:27:25.684 END TEST cpu_locks 00:27:25.684 ************************************ 00:27:25.684 20:26:20 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:25.684 20:26:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:25.684 ************************************ 00:27:25.684 END TEST event 00:27:25.684 ************************************ 00:27:25.684 00:27:25.684 real 0m46.669s 00:27:25.684 user 1m30.182s 00:27:25.684 sys 0m9.631s 00:27:25.684 20:26:20 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:25.684 20:26:20 event -- common/autotest_common.sh@10 -- # set +x 00:27:25.684 20:26:20 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:27:25.684 20:26:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:25.684 20:26:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:25.684 20:26:20 -- common/autotest_common.sh@10 -- # set +x 00:27:25.684 ************************************ 00:27:25.684 START TEST thread 00:27:25.684 ************************************ 00:27:25.684 20:26:20 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:27:25.684 * Looking for test storage... 
00:27:25.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:27:25.684 20:26:20 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:25.684 20:26:20 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:25.684 20:26:20 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:27:25.944 20:26:20 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:25.944 20:26:20 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:25.944 20:26:20 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:25.944 20:26:20 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:25.944 20:26:20 thread -- scripts/common.sh@336 -- # IFS=.-: 00:27:25.944 20:26:20 thread -- scripts/common.sh@336 -- # read -ra ver1 00:27:25.944 20:26:20 thread -- scripts/common.sh@337 -- # IFS=.-: 00:27:25.944 20:26:20 thread -- scripts/common.sh@337 -- # read -ra ver2 00:27:25.944 20:26:20 thread -- scripts/common.sh@338 -- # local 'op=<' 00:27:25.944 20:26:20 thread -- scripts/common.sh@340 -- # ver1_l=2 00:27:25.944 20:26:20 thread -- scripts/common.sh@341 -- # ver2_l=1 00:27:25.944 20:26:20 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:25.944 20:26:20 thread -- scripts/common.sh@344 -- # case "$op" in 00:27:25.944 20:26:20 thread -- scripts/common.sh@345 -- # : 1 00:27:25.944 20:26:20 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:25.944 20:26:20 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:25.944 20:26:20 thread -- scripts/common.sh@365 -- # decimal 1 00:27:25.944 20:26:20 thread -- scripts/common.sh@353 -- # local d=1 00:27:25.944 20:26:20 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:25.944 20:26:20 thread -- scripts/common.sh@355 -- # echo 1 00:27:25.944 20:26:20 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:27:25.944 20:26:20 thread -- scripts/common.sh@366 -- # decimal 2 00:27:25.944 20:26:21 thread -- scripts/common.sh@353 -- # local d=2 00:27:25.944 20:26:21 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:25.944 20:26:21 thread -- scripts/common.sh@355 -- # echo 2 00:27:25.944 20:26:21 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:27:25.944 20:26:21 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:25.944 20:26:21 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:25.944 20:26:21 thread -- scripts/common.sh@368 -- # return 0 00:27:25.944 20:26:21 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:25.944 20:26:21 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:25.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.944 --rc genhtml_branch_coverage=1 00:27:25.944 --rc genhtml_function_coverage=1 00:27:25.944 --rc genhtml_legend=1 00:27:25.944 --rc geninfo_all_blocks=1 00:27:25.944 --rc geninfo_unexecuted_blocks=1 00:27:25.944 00:27:25.944 ' 00:27:25.944 20:26:21 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:25.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.944 --rc genhtml_branch_coverage=1 00:27:25.944 --rc genhtml_function_coverage=1 00:27:25.944 --rc genhtml_legend=1 00:27:25.944 --rc geninfo_all_blocks=1 00:27:25.944 --rc geninfo_unexecuted_blocks=1 00:27:25.944 00:27:25.944 ' 00:27:25.944 20:26:21 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:25.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:27:25.944 --rc genhtml_branch_coverage=1 00:27:25.944 --rc genhtml_function_coverage=1 00:27:25.944 --rc genhtml_legend=1 00:27:25.944 --rc geninfo_all_blocks=1 00:27:25.944 --rc geninfo_unexecuted_blocks=1 00:27:25.944 00:27:25.944 ' 00:27:25.944 20:26:21 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:25.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.944 --rc genhtml_branch_coverage=1 00:27:25.944 --rc genhtml_function_coverage=1 00:27:25.944 --rc genhtml_legend=1 00:27:25.944 --rc geninfo_all_blocks=1 00:27:25.944 --rc geninfo_unexecuted_blocks=1 00:27:25.944 00:27:25.944 ' 00:27:25.944 20:26:21 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:27:25.944 20:26:21 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:27:25.944 20:26:21 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:25.944 20:26:21 thread -- common/autotest_common.sh@10 -- # set +x 00:27:25.944 ************************************ 00:27:25.944 START TEST thread_poller_perf 00:27:25.944 ************************************ 00:27:25.944 20:26:21 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:27:25.944 [2024-10-01 20:26:21.030027] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:27:25.944 [2024-10-01 20:26:21.030841] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61511 ] 00:27:25.944 [2024-10-01 20:26:21.167101] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.204 [2024-10-01 20:26:21.227203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.204 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:27:27.141 ====================================== 00:27:27.141 busy:2207981986 (cyc) 00:27:27.141 total_run_count: 301000 00:27:27.141 tsc_hz: 2200000000 (cyc) 00:27:27.141 ====================================== 00:27:27.141 poller_cost: 7335 (cyc), 3334 (nsec) 00:27:27.141 00:27:27.141 real 0m1.293s 00:27:27.141 user 0m1.149s 00:27:27.141 sys 0m0.037s 00:27:27.141 20:26:22 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:27.141 20:26:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:27:27.141 ************************************ 00:27:27.141 END TEST thread_poller_perf 00:27:27.141 ************************************ 00:27:27.141 20:26:22 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:27:27.141 20:26:22 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:27:27.141 20:26:22 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:27.141 20:26:22 thread -- common/autotest_common.sh@10 -- # set +x 00:27:27.141 ************************************ 00:27:27.141 START TEST thread_poller_perf 00:27:27.141 ************************************ 00:27:27.141 20:26:22 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:27:27.141 [2024-10-01 20:26:22.382205] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:27:27.141 [2024-10-01 20:26:22.382563] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61541 ] 00:27:27.455 [2024-10-01 20:26:22.522619] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.455 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:27:27.455 [2024-10-01 20:26:22.590902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.826 ====================================== 00:27:28.826 busy:2202618959 (cyc) 00:27:28.826 total_run_count: 3865000 00:27:28.826 tsc_hz: 2200000000 (cyc) 00:27:28.826 ====================================== 00:27:28.826 poller_cost: 569 (cyc), 258 (nsec) 00:27:28.826 ************************************ 00:27:28.826 END TEST thread_poller_perf 00:27:28.826 ************************************ 00:27:28.826 00:27:28.826 real 0m1.300s 00:27:28.826 user 0m1.148s 00:27:28.826 sys 0m0.045s 00:27:28.826 20:26:23 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:28.826 20:26:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:27:28.826 20:26:23 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:27:28.826 00:27:28.826 real 0m2.868s 00:27:28.826 user 0m2.427s 00:27:28.826 sys 0m0.216s 00:27:28.826 ************************************ 00:27:28.826 END TEST thread 00:27:28.826 ************************************ 00:27:28.826 20:26:23 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:28.826 20:26:23 thread -- common/autotest_common.sh@10 -- # set +x 00:27:28.826 20:26:23 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:27:28.826 20:26:23 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:27:28.826 20:26:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:28.826 20:26:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:28.826 20:26:23 -- common/autotest_common.sh@10 -- # set +x 00:27:28.826 ************************************ 00:27:28.826 START TEST app_cmdline 00:27:28.826 ************************************ 00:27:28.826 20:26:23 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:27:28.826 * Looking for test storage... 00:27:28.826 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:27:28.826 20:26:23 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:28.826 20:26:23 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:27:28.826 20:26:23 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:28.826 20:26:23 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@345 -- # : 1 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:28.826 20:26:23 app_cmdline -- scripts/common.sh@368 -- # return 0 00:27:28.826 20:26:23 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:28.826 20:26:23 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:28.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.826 --rc genhtml_branch_coverage=1 00:27:28.826 --rc genhtml_function_coverage=1 00:27:28.826 --rc genhtml_legend=1 00:27:28.826 --rc geninfo_all_blocks=1 00:27:28.826 --rc geninfo_unexecuted_blocks=1 00:27:28.826 00:27:28.826 ' 00:27:28.826 20:26:23 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:28.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.826 --rc genhtml_branch_coverage=1 00:27:28.826 --rc genhtml_function_coverage=1 00:27:28.826 --rc genhtml_legend=1 00:27:28.826 --rc geninfo_all_blocks=1 00:27:28.826 --rc geninfo_unexecuted_blocks=1 00:27:28.826 00:27:28.826 ' 00:27:28.826 20:26:23 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:28.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.826 --rc genhtml_branch_coverage=1 00:27:28.826 --rc genhtml_function_coverage=1 00:27:28.826 --rc genhtml_legend=1 00:27:28.826 --rc geninfo_all_blocks=1 00:27:28.826 --rc geninfo_unexecuted_blocks=1 00:27:28.826 00:27:28.826 ' 00:27:28.826 20:26:23 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:28.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.826 --rc genhtml_branch_coverage=1 00:27:28.826 --rc genhtml_function_coverage=1 00:27:28.826 --rc genhtml_legend=1 00:27:28.826 --rc geninfo_all_blocks=1 00:27:28.826 --rc geninfo_unexecuted_blocks=1 00:27:28.826 00:27:28.826 ' 00:27:28.826 20:26:23 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:27:28.826 20:26:23 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61623 00:27:28.826 20:26:23 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:27:28.826 20:26:23 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61623 00:27:28.826 20:26:23 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 61623 ']' 00:27:28.826 20:26:23 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.826 20:26:23 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:28.826 20:26:23 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.826 20:26:23 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:28.826 20:26:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:27:28.826 [2024-10-01 20:26:24.001784] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:27:28.826 [2024-10-01 20:26:24.002141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61623 ] 00:27:29.083 [2024-10-01 20:26:24.142681] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.083 [2024-10-01 20:26:24.214252] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.015 20:26:25 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:30.015 20:26:25 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:27:30.015 20:26:25 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:27:30.273 { 00:27:30.273 "fields": { 00:27:30.273 "commit": "1b1c3081e", 00:27:30.273 "major": 25, 00:27:30.273 "minor": 1, 00:27:30.273 "patch": 0, 00:27:30.273 "suffix": "-pre" 00:27:30.273 }, 00:27:30.273 "version": "SPDK v25.01-pre git sha1 1b1c3081e" 00:27:30.273 } 00:27:30.273 20:26:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:27:30.273 20:26:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:27:30.273 20:26:25 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:27:30.273 20:26:25 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:27:30.273 20:26:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:27:30.273 20:26:25 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.273 20:26:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:27:30.273 20:26:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:27:30.273 20:26:25 app_cmdline -- app/cmdline.sh@26 -- # sort 00:27:30.273 20:26:25 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.273 20:26:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:27:30.273 20:26:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:27:30.273 20:26:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:27:30.273 20:26:25 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:27:30.273 20:26:25 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:27:30.273 20:26:25 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:30.273 20:26:25 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:30.273 20:26:25 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:30.273 20:26:25 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:30.273 20:26:25 app_cmdline -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:30.274 20:26:25 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:30.274 20:26:25 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:30.274 20:26:25 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:30.274 20:26:25 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:27:30.532 2024/10/01 20:26:25 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:27:30.532 request: 00:27:30.532 { 00:27:30.532 "method": "env_dpdk_get_mem_stats", 00:27:30.532 "params": {} 00:27:30.532 } 00:27:30.532 Got JSON-RPC error response 00:27:30.532 GoRPCClient: error on JSON-RPC call 00:27:30.532 20:26:25 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:27:30.532 20:26:25 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:30.532 20:26:25 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:30.532 20:26:25 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:30.532 20:26:25 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61623 00:27:30.532 20:26:25 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 61623 ']' 00:27:30.532 20:26:25 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 61623 00:27:30.532 20:26:25 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:27:30.532 20:26:25 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:30.532 20:26:25 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61623 00:27:30.532 20:26:25 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:30.532 20:26:25 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:30.532 killing process with pid 61623 00:27:30.532 20:26:25 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61623' 00:27:30.532 20:26:25 app_cmdline -- common/autotest_common.sh@969 -- # kill 61623 00:27:30.532 20:26:25 app_cmdline -- common/autotest_common.sh@974 -- # wait 61623 00:27:31.103 00:27:31.103 real 0m2.350s 00:27:31.103 user 0m3.090s 00:27:31.103 sys 0m0.472s 00:27:31.103 20:26:26 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:31.103 20:26:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:27:31.103 ************************************ 00:27:31.103 END TEST app_cmdline 00:27:31.103 ************************************ 00:27:31.103 20:26:26 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:27:31.103 20:26:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:31.103 20:26:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:31.103 20:26:26 -- common/autotest_common.sh@10 -- # set +x 00:27:31.103 ************************************ 00:27:31.103 START TEST version 00:27:31.103 ************************************ 00:27:31.103 20:26:26 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:27:31.103 * Looking for test storage... 
00:27:31.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:27:31.103 20:26:26 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:31.103 20:26:26 version -- common/autotest_common.sh@1681 -- # lcov --version 00:27:31.103 20:26:26 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:31.103 20:26:26 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:31.103 20:26:26 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:31.103 20:26:26 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:31.103 20:26:26 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:31.103 20:26:26 version -- scripts/common.sh@336 -- # IFS=.-: 00:27:31.103 20:26:26 version -- scripts/common.sh@336 -- # read -ra ver1 00:27:31.103 20:26:26 version -- scripts/common.sh@337 -- # IFS=.-: 00:27:31.103 20:26:26 version -- scripts/common.sh@337 -- # read -ra ver2 00:27:31.103 20:26:26 version -- scripts/common.sh@338 -- # local 'op=<' 00:27:31.103 20:26:26 version -- scripts/common.sh@340 -- # ver1_l=2 00:27:31.103 20:26:26 version -- scripts/common.sh@341 -- # ver2_l=1 00:27:31.103 20:26:26 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:31.103 20:26:26 version -- scripts/common.sh@344 -- # case "$op" in 00:27:31.103 20:26:26 version -- scripts/common.sh@345 -- # : 1 00:27:31.103 20:26:26 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:31.103 20:26:26 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:31.103 20:26:26 version -- scripts/common.sh@365 -- # decimal 1 00:27:31.103 20:26:26 version -- scripts/common.sh@353 -- # local d=1 00:27:31.103 20:26:26 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:31.103 20:26:26 version -- scripts/common.sh@355 -- # echo 1 00:27:31.103 20:26:26 version -- scripts/common.sh@365 -- # ver1[v]=1 00:27:31.103 20:26:26 version -- scripts/common.sh@366 -- # decimal 2 00:27:31.103 20:26:26 version -- scripts/common.sh@353 -- # local d=2 00:27:31.103 20:26:26 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:31.103 20:26:26 version -- scripts/common.sh@355 -- # echo 2 00:27:31.103 20:26:26 version -- scripts/common.sh@366 -- # ver2[v]=2 00:27:31.103 20:26:26 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:31.103 20:26:26 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:31.103 20:26:26 version -- scripts/common.sh@368 -- # return 0 00:27:31.103 20:26:26 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:31.103 20:26:26 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:31.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.103 --rc genhtml_branch_coverage=1 00:27:31.103 --rc genhtml_function_coverage=1 00:27:31.103 --rc genhtml_legend=1 00:27:31.103 --rc geninfo_all_blocks=1 00:27:31.103 --rc geninfo_unexecuted_blocks=1 00:27:31.103 00:27:31.103 ' 00:27:31.103 20:26:26 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:31.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.103 --rc genhtml_branch_coverage=1 00:27:31.103 --rc genhtml_function_coverage=1 00:27:31.103 --rc genhtml_legend=1 00:27:31.103 --rc geninfo_all_blocks=1 00:27:31.103 --rc geninfo_unexecuted_blocks=1 00:27:31.103 00:27:31.103 ' 00:27:31.103 20:26:26 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:31.103 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:27:31.103 --rc genhtml_branch_coverage=1 00:27:31.103 --rc genhtml_function_coverage=1 00:27:31.103 --rc genhtml_legend=1 00:27:31.103 --rc geninfo_all_blocks=1 00:27:31.103 --rc geninfo_unexecuted_blocks=1 00:27:31.103 00:27:31.103 ' 00:27:31.103 20:26:26 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:31.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.103 --rc genhtml_branch_coverage=1 00:27:31.103 --rc genhtml_function_coverage=1 00:27:31.103 --rc genhtml_legend=1 00:27:31.103 --rc geninfo_all_blocks=1 00:27:31.103 --rc geninfo_unexecuted_blocks=1 00:27:31.103 00:27:31.103 ' 00:27:31.103 20:26:26 version -- app/version.sh@17 -- # get_header_version major 00:27:31.103 20:26:26 version -- app/version.sh@14 -- # tr -d '"' 00:27:31.103 20:26:26 version -- app/version.sh@14 -- # cut -f2 00:27:31.103 20:26:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:27:31.103 20:26:26 version -- app/version.sh@17 -- # major=25 00:27:31.103 20:26:26 version -- app/version.sh@18 -- # get_header_version minor 00:27:31.103 20:26:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:27:31.103 20:26:26 version -- app/version.sh@14 -- # tr -d '"' 00:27:31.103 20:26:26 version -- app/version.sh@14 -- # cut -f2 00:27:31.103 20:26:26 version -- app/version.sh@18 -- # minor=1 00:27:31.103 20:26:26 version -- app/version.sh@19 -- # get_header_version patch 00:27:31.103 20:26:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:27:31.103 20:26:26 version -- app/version.sh@14 -- # cut -f2 00:27:31.103 20:26:26 version -- app/version.sh@14 -- # tr -d '"' 00:27:31.103 20:26:26 version -- app/version.sh@19 -- # patch=0 00:27:31.103 20:26:26 version -- app/version.sh@20 -- # get_header_version suffix 00:27:31.103 20:26:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:27:31.103 20:26:26 version -- app/version.sh@14 -- # cut -f2 00:27:31.103 20:26:26 version -- app/version.sh@14 -- # tr -d '"' 00:27:31.103 20:26:26 version -- app/version.sh@20 -- # suffix=-pre 00:27:31.103 20:26:26 version -- app/version.sh@22 -- # version=25.1 00:27:31.103 20:26:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:27:31.103 20:26:26 version -- app/version.sh@28 -- # version=25.1rc0 00:27:31.103 20:26:26 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:27:31.103 20:26:26 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:27:31.361 20:26:26 version -- app/version.sh@30 -- # py_version=25.1rc0 00:27:31.361 20:26:26 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:27:31.361 00:27:31.361 real 0m0.235s 00:27:31.361 user 0m0.154s 00:27:31.362 sys 0m0.115s 00:27:31.362 20:26:26 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:31.362 20:26:26 version -- common/autotest_common.sh@10 -- # set +x 00:27:31.362 ************************************ 00:27:31.362 END TEST version 00:27:31.362 ************************************ 00:27:31.362 20:26:26 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:27:31.362 20:26:26 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:27:31.362 20:26:26 -- spdk/autotest.sh@194 -- # uname -s 00:27:31.362 20:26:26 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:27:31.362 20:26:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:27:31.362 20:26:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:27:31.362 20:26:26 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:27:31.362 20:26:26 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:27:31.362 20:26:26 -- spdk/autotest.sh@256 -- # timing_exit lib 00:27:31.362 20:26:26 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:31.362 20:26:26 -- common/autotest_common.sh@10 -- # set +x 00:27:31.362 20:26:26 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:27:31.362 20:26:26 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:27:31.362 20:26:26 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:27:31.362 20:26:26 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:27:31.362 20:26:26 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:27:31.362 20:26:26 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:27:31.362 20:26:26 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:27:31.362 20:26:26 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:31.362 20:26:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:31.362 20:26:26 -- common/autotest_common.sh@10 -- # set +x 00:27:31.362 ************************************ 00:27:31.362 START TEST nvmf_tcp 00:27:31.362 ************************************ 00:27:31.362 20:26:26 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:27:31.362 * Looking for test storage... 00:27:31.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:27:31.362 20:26:26 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:31.362 20:26:26 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:27:31.362 20:26:26 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:31.620 20:26:26 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:31.620 20:26:26 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:27:31.620 20:26:26 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:31.620 20:26:26 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:31.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.620 --rc genhtml_branch_coverage=1 00:27:31.620 --rc genhtml_function_coverage=1 00:27:31.620 --rc genhtml_legend=1 00:27:31.620 --rc geninfo_all_blocks=1 00:27:31.620 --rc geninfo_unexecuted_blocks=1 00:27:31.620 00:27:31.620 ' 00:27:31.621 20:26:26 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:31.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.621 --rc genhtml_branch_coverage=1 00:27:31.621 --rc genhtml_function_coverage=1 00:27:31.621 --rc genhtml_legend=1 00:27:31.621 --rc geninfo_all_blocks=1 00:27:31.621 --rc geninfo_unexecuted_blocks=1 00:27:31.621 00:27:31.621 ' 00:27:31.621 20:26:26 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:31.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.621 --rc genhtml_branch_coverage=1 00:27:31.621 --rc genhtml_function_coverage=1 00:27:31.621 --rc genhtml_legend=1 00:27:31.621 --rc geninfo_all_blocks=1 00:27:31.621 --rc geninfo_unexecuted_blocks=1 00:27:31.621 00:27:31.621 ' 00:27:31.621 20:26:26 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:31.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.621 --rc genhtml_branch_coverage=1 00:27:31.621 --rc genhtml_function_coverage=1 00:27:31.621 --rc genhtml_legend=1 00:27:31.621 --rc geninfo_all_blocks=1 00:27:31.621 --rc geninfo_unexecuted_blocks=1 00:27:31.621 00:27:31.621 ' 00:27:31.621 20:26:26 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:27:31.621 20:26:26 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:31.621 20:26:26 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:27:31.621 20:26:26 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:31.621 20:26:26 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:31.621 20:26:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:31.621 ************************************ 00:27:31.621 START TEST nvmf_target_core 00:27:31.621 ************************************ 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:27:31.621 * Looking for test storage... 00:27:31.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:27:31.621 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:27:31.879 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:27:31.879 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:31.879 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:27:31.879 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:27:31.879 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:31.879 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:31.879 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:27:31.879 20:26:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:31.879 20:26:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:31.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.879 --rc genhtml_branch_coverage=1 00:27:31.879 --rc genhtml_function_coverage=1 00:27:31.879 --rc genhtml_legend=1 00:27:31.879 --rc geninfo_all_blocks=1 00:27:31.879 --rc geninfo_unexecuted_blocks=1 00:27:31.879 00:27:31.879 ' 00:27:31.879 20:26:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:31.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.879 --rc genhtml_branch_coverage=1 00:27:31.879 --rc genhtml_function_coverage=1 00:27:31.879 --rc genhtml_legend=1 00:27:31.879 --rc geninfo_all_blocks=1 00:27:31.879 --rc geninfo_unexecuted_blocks=1 00:27:31.879 00:27:31.879 ' 00:27:31.879 20:26:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:31.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.879 --rc genhtml_branch_coverage=1 00:27:31.879 --rc genhtml_function_coverage=1 00:27:31.879 --rc genhtml_legend=1 00:27:31.879 --rc geninfo_all_blocks=1 00:27:31.879 --rc geninfo_unexecuted_blocks=1 00:27:31.879 00:27:31.879 ' 00:27:31.879 20:26:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:31.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.879 --rc genhtml_branch_coverage=1 00:27:31.879 --rc genhtml_function_coverage=1 00:27:31.879 --rc genhtml_legend=1 00:27:31.879 --rc geninfo_all_blocks=1 00:27:31.879 --rc geninfo_unexecuted_blocks=1 00:27:31.879 00:27:31.879 ' 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:31.880 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:27:31.880 ************************************ 00:27:31.880 START TEST nvmf_abort 00:27:31.880 ************************************ 00:27:31.880 20:26:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:27:31.880 * Looking for test storage... 
00:27:31.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:31.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.880 --rc genhtml_branch_coverage=1 00:27:31.880 --rc genhtml_function_coverage=1 00:27:31.880 --rc genhtml_legend=1 00:27:31.880 --rc geninfo_all_blocks=1 00:27:31.880 --rc geninfo_unexecuted_blocks=1 00:27:31.880 00:27:31.880 ' 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:31.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.880 --rc genhtml_branch_coverage=1 00:27:31.880 --rc genhtml_function_coverage=1 00:27:31.880 --rc genhtml_legend=1 00:27:31.880 --rc geninfo_all_blocks=1 00:27:31.880 --rc geninfo_unexecuted_blocks=1 00:27:31.880 00:27:31.880 ' 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:31.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.880 --rc genhtml_branch_coverage=1 00:27:31.880 --rc genhtml_function_coverage=1 00:27:31.880 --rc genhtml_legend=1 00:27:31.880 --rc geninfo_all_blocks=1 00:27:31.880 --rc geninfo_unexecuted_blocks=1 00:27:31.880 00:27:31.880 ' 00:27:31.880 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:31.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.881 --rc genhtml_branch_coverage=1 00:27:31.881 --rc genhtml_function_coverage=1 00:27:31.881 --rc genhtml_legend=1 00:27:31.881 --rc geninfo_all_blocks=1 00:27:31.881 --rc geninfo_unexecuted_blocks=1 00:27:31.881 00:27:31.881 ' 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:31.881 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:31.881 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:32.139 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:32.140 
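
The `[: : integer expression expected` message recorded above is a real (if harmless) bash complaint from nvmf/common.sh line 33: an empty variable is fed to a numeric `-eq` test inside `build_nvmf_app_args`. A sketch of the failing pattern and the usual guards — the variable name is a stand-in, since the log does not show which flag expanded empty:

flag=''                              # unset/empty test flag (illustrative name)
[ "$flag" -eq 1 ]                    # -> "[: : integer expression expected"

# guard 1: default the expansion before the numeric test
[ "${flag:-0}" -eq 1 ] && echo enabled

# guard 2: arithmetic evaluation by name, where empty/unset evaluates to 0
(( flag == 1 )) && echo enabled
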
20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # nvmf_veth_init 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set 
nvmf_init_br nomaster 00:27:32.140 Cannot find device "nvmf_init_br" 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:32.140 Cannot find device "nvmf_init_br2" 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:32.140 Cannot find device "nvmf_tgt_br" 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:32.140 Cannot find device "nvmf_tgt_br2" 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:32.140 Cannot find device "nvmf_init_br" 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:32.140 Cannot find device "nvmf_init_br2" 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:32.140 Cannot find device "nvmf_tgt_br" 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:32.140 Cannot find device "nvmf_tgt_br2" 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:32.140 Cannot find device "nvmf_br" 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:32.140 Cannot find device "nvmf_init_if" 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:32.140 Cannot find device "nvmf_init_if2" 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:32.140 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:32.140 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:32.140 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:32.398 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:32.398 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:32.398 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:32.398 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:32.398 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:32.398 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:32.398 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:32.398 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:32.398 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:32.398 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:32.398 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:32.398 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:32.398 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:32.398 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT' 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:32.656 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:32.656 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:27:32.656 00:27:32.656 --- 10.0.0.3 ping statistics --- 00:27:32.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.656 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:32.656 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:32.656 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:27:32.656 00:27:32.656 --- 10.0.0.4 ping statistics --- 00:27:32.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.656 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:32.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:32.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:27:32.656 00:27:32.656 --- 10.0.0.1 ping statistics --- 00:27:32.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.656 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:32.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:32.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:27:32.656 00:27:32.656 --- 10.0.0.2 ping statistics --- 00:27:32.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.656 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # return 0 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=62067 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 62067 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:32.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 62067 ']' 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:32.656 20:26:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:32.656 [2024-10-01 20:26:27.787639] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
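
`nvmfappstart -m 0xE` above launches `nvmf_tgt` inside the target namespace on cores 1-3 (mask 0xE, matching the three reactor-start notices that follow) and then blocks in `waitforlisten` until the RPC socket answers. A condensed sketch of that start-and-wait pattern, using the paths from this run — the polling loop is illustrative; the real `waitforlisten` lives in SPDK's test common scripts:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for (( i = 0; i < 100; i++ )); do
    # ready once the UNIX-domain RPC socket answers a trivial method
    if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    # bail out early if the target crashed instead of merely being slow
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited" >&2; exit 1; }
    sleep 0.5
done
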
00:27:32.656 [2024-10-01 20:26:27.787738] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:32.914 [2024-10-01 20:26:27.947213] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:32.914 [2024-10-01 20:26:28.036173] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:32.914 [2024-10-01 20:26:28.036612] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:32.914 [2024-10-01 20:26:28.036898] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:32.914 [2024-10-01 20:26:28.037249] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:32.914 [2024-10-01 20:26:28.037478] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:32.914 [2024-10-01 20:26:28.037850] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:32.914 [2024-10-01 20:26:28.037987] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:27:32.914 [2024-10-01 20:26:28.038177] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.172 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:33.172 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:27:33.172 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:33.172 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:33.172 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.172 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.172 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:33.172 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.172 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.172 [2024-10-01 20:26:28.242891] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.172 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.172 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.173 Malloc0 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.173 
Delay0 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.173 [2024-10-01 20:26:28.299703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.173 20:26:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:33.431 [2024-10-01 20:26:28.470557] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:35.330 Initializing NVMe Controllers 00:27:35.330 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:27:35.330 controller IO queue size 128 less than required 00:27:35.330 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:35.330 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:35.330 Initialization complete. Launching workers. 
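
The `rpc_cmd` calls traced above provision the abort target: a TCP transport, a 64 MiB malloc bdev wrapped in a delay bdev (1,000,000 µs on every latency knob, so submitted I/O lingers long enough to be aborted), and subsystem cnode0 listening on 10.0.0.3:4420. Replayed as direct rpc.py invocations with the abort example that then hammers it — the arguments are exactly those from the log, while the inline comments are interpretive:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" nvmf_create_transport -t tcp -o -u 8192 -a 256     # TCP transport, 8 KiB in-capsule data, 256 qpairs
"$rpc" bdev_malloc_create 64 4096 -b Malloc0              # 64 MiB RAM bdev, 4096-byte blocks
"$rpc" bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000           # avg/p99 read+write latency: 1 s each
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

# queue depth 128 against the 128-entry controller queue keeps I/O
# backed up behind the delay bdev, so most submissions end up aborted
/home/vagrant/spdk_repo/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128
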
00:27:35.330 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27680 00:27:35.330 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27741, failed to submit 62 00:27:35.330 success 27680, unsuccessful 61, failed 0 00:27:35.330 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:35.330 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.330 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:35.330 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.330 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:35.330 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:35.330 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:35.330 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:35.330 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:35.330 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:35.330 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:35.330 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:35.330 rmmod nvme_tcp 00:27:35.330 rmmod nvme_fabrics 00:27:35.589 rmmod nvme_keyring 00:27:35.589 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:35.589 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:35.589 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:35.589 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 62067 ']' 00:27:35.589 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 62067 00:27:35.589 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 62067 ']' 00:27:35.589 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 62067 00:27:35.589 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:27:35.589 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:35.589 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62067 00:27:35.589 killing process with pid 62067 00:27:35.589 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:35.589 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:35.589 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62067' 00:27:35.589 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 62067 00:27:35.589 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 62067 00:27:35.848 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:35.848 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:35.848 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:35.848 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:35.848 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:35.848 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:27:35.848 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:27:35.848 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:35.848 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:35.848 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:35.848 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:35.848 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:35.849 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:35.849 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:35.849 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:35.849 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:35.849 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:35.849 20:26:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:35.849 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:35.849 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:35.849 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:35.849 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:35.849 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:35.849 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.849 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.849 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:27:36.108 00:27:36.108 real 0m4.197s 00:27:36.108 user 0m10.459s 00:27:36.108 sys 0m1.043s 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:36.108 ************************************ 00:27:36.108 END TEST nvmf_abort 00:27:36.108 ************************************ 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:27:36.108 ************************************ 00:27:36.108 START TEST nvmf_ns_hotplug_stress 00:27:36.108 ************************************ 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:27:36.108 * Looking for test storage... 00:27:36.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:36.108 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:36.109 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:36.109 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:36.109 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:36.109 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:36.109 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:36.109 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:36.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.372 --rc genhtml_branch_coverage=1 00:27:36.372 --rc genhtml_function_coverage=1 00:27:36.372 --rc genhtml_legend=1 00:27:36.372 --rc geninfo_all_blocks=1 00:27:36.372 --rc geninfo_unexecuted_blocks=1 00:27:36.372 00:27:36.372 ' 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:36.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.372 --rc genhtml_branch_coverage=1 00:27:36.372 --rc genhtml_function_coverage=1 00:27:36.372 --rc genhtml_legend=1 00:27:36.372 --rc geninfo_all_blocks=1 00:27:36.372 --rc geninfo_unexecuted_blocks=1 00:27:36.372 00:27:36.372 ' 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:36.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.372 --rc genhtml_branch_coverage=1 00:27:36.372 --rc genhtml_function_coverage=1 00:27:36.372 --rc genhtml_legend=1 00:27:36.372 --rc geninfo_all_blocks=1 00:27:36.372 --rc geninfo_unexecuted_blocks=1 00:27:36.372 00:27:36.372 ' 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:36.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.372 --rc genhtml_branch_coverage=1 00:27:36.372 --rc genhtml_function_coverage=1 00:27:36.372 --rc genhtml_legend=1 00:27:36.372 --rc geninfo_all_blocks=1 00:27:36.372 --rc geninfo_unexecuted_blocks=1 00:27:36.372 00:27:36.372 ' 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.372 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:36.373 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # nvmf_veth_init 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:36.373 20:26:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:36.373 Cannot find device "nvmf_init_br" 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:36.373 Cannot find device "nvmf_init_br2" 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:36.373 Cannot find device "nvmf_tgt_br" 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:36.373 Cannot find device "nvmf_tgt_br2" 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:36.373 Cannot find device "nvmf_init_br" 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:36.373 Cannot find device "nvmf_init_br2" 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:36.373 Cannot find device "nvmf_tgt_br" 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:36.373 Cannot find device "nvmf_tgt_br2" 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:36.373 Cannot find device "nvmf_br" 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@170 -- # true 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:36.373 Cannot find device "nvmf_init_if" 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:36.373 Cannot find device "nvmf_init_if2" 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:36.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:36.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:36.373 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip 
link set nvmf_tgt_br up 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:36.632 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:36.632 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:27:36.632 00:27:36.632 --- 10.0.0.3 ping statistics --- 00:27:36.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.632 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:36.632 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:27:36.632 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:27:36.632 00:27:36.632 --- 10.0.0.4 ping statistics --- 00:27:36.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.632 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:36.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:36.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:27:36.632 00:27:36.632 --- 10.0.0.1 ping statistics --- 00:27:36.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.632 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:36.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:36.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:27:36.632 00:27:36.632 --- 10.0.0.2 ping statistics --- 00:27:36.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.632 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # return 0 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:36.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
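For reference, the scaffolding assembled by the nvmf/common.sh trace above is: four veth pairs, the two nvmf_tgt_if* ends moved into the nvmf_tgt_ns_spdk namespace, the four *_br peer ends enslaved to one bridge, and TCP/4420 opened for NVMe/TCP. A condensed standalone sketch of those traced commands (same device names and addresses; the harness's idempotent teardown steps and iptables comment tags are omitted):

#!/usr/bin/env bash
# Rebuild the test topology from the trace above (run as root).
set -euo pipefail
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# Initiator-side and target-side veth pairs.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target ends live inside the namespace.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Initiator 10.0.0.1/.2 on the host, target 10.0.0.3/.4 in the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# One bridge ties the four host-side peer ends together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Admit NVMe/TCP traffic and let the bridge forward between its ports.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Same sanity checks as the trace: each side pings the other.
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1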
00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=62348 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 62348 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 62348 ']' 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:36.632 20:26:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:36.632 [2024-10-01 20:26:31.862647] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:27:36.632 [2024-10-01 20:26:31.862923] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:36.891 [2024-10-01 20:26:32.000002] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:36.891 [2024-10-01 20:26:32.082328] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:36.891 [2024-10-01 20:26:32.082646] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:36.891 [2024-10-01 20:26:32.082886] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:36.891 [2024-10-01 20:26:32.083214] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:36.891 [2024-10-01 20:26:32.083393] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
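The target launch just traced, plus the provisioning RPCs that follow below, condense to the sketch here. The polling probe is an assumption standing in for the harness's waitforlisten helper (rpc_get_methods is a stock SPDK RPC); everything else mirrors the traced commands:

NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

# Launch the target inside the namespace; -m 0xE pins it to cores 1-3,
# matching the three reactors started above.
"${NS_CMD[@]}" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# The RPC socket is a UNIX-domain socket, so host-side rpc.py reaches it
# without entering the namespace. Poll until the target answers.
until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done

# Provisioning, as traced below: TCP transport, one subsystem with data
# and discovery listeners, and a Malloc->Delay stack plus a null bdev.
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
"$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
"$RPC" bdev_malloc_create 32 512 -b Malloc0            # 32 MiB, 512 B blocks
"$RPC" bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000        # traced latencies (us)
"$RPC" nvmf_subsystem_add_ns "$NQN" Delay0
"$RPC" bdev_null_create NULL1 1000 512                 # 1000 MiB null bdev
"$RPC" nvmf_subsystem_add_ns "$NQN" NULL1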
00:27:36.891 [2024-10-01 20:26:32.083692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:36.891 [2024-10-01 20:26:32.083760] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:27:36.891 [2024-10-01 20:26:32.083770] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.825 20:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:37.825 20:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:27:37.825 20:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:37.825 20:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:37.825 20:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:37.825 20:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:37.825 20:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:27:37.825 20:26:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:38.119 [2024-10-01 20:26:33.198039] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:38.119 20:26:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:38.400 20:26:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:38.659 [2024-10-01 20:26:33.746619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:38.659 20:26:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:27:38.917 20:26:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:39.484 Malloc0 00:27:39.484 20:26:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:39.484 Delay0 00:27:39.484 20:26:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:40.052 20:26:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:40.052 NULL1 00:27:40.052 20:26:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:27:40.312 20:26:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # 
PERF_PID=62490 00:27:40.312 20:26:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:40.312 20:26:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:27:40.312 20:26:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:41.685 Read completed with error (sct=0, sc=11) 00:27:41.685 20:26:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:41.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:41.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:41.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:41.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:41.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:41.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:41.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:41.943 20:26:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:41.943 20:26:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:42.509 true 00:27:42.509 20:26:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:27:42.509 20:26:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:43.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:43.075 20:26:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:43.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:43.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:43.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:43.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:43.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:43.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:43.334 20:26:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:43.334 20:26:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:43.592 true 00:27:43.592 20:26:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:27:43.592 20:26:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:27:44.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.526 20:26:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:44.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:44.785 20:26:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:44.785 20:26:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:45.042 true 00:27:45.042 20:26:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:27:45.043 20:26:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:45.299 20:26:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:45.590 20:26:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:45.590 20:26:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:45.849 true 00:27:45.849 20:26:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:27:45.849 20:26:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:46.783 20:26:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:47.042 20:26:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:47.042 20:26:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:47.300 true 00:27:47.300 20:26:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:27:47.300 20:26:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:47.558 20:26:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:47.817 20:26:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:47.817 20:26:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:48.075 true 00:27:48.075 20:26:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:27:48.075 20:26:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.333 20:26:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:48.898 20:26:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:48.898 20:26:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:49.155 true 00:27:49.155 20:26:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:27:49.155 20:26:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.413 20:26:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:49.979 20:26:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:49.979 20:26:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:50.236 true 00:27:50.236 20:26:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:27:50.236 20:26:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.493 20:26:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:50.750 20:26:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:27:50.750 20:26:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:51.008 true 00:27:51.266 20:26:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:27:51.266 20:26:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:51.523 20:26:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:51.781 20:26:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- 
# null_size=1010 00:27:51.781 20:26:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:52.038 true 00:27:52.038 20:26:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:27:52.038 20:26:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:52.296 20:26:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:52.553 20:26:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:52.553 20:26:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:52.811 true 00:27:52.811 20:26:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:27:52.811 20:26:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:53.774 20:26:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:54.032 20:26:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:54.032 20:26:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:54.290 true 00:27:54.290 20:26:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:27:54.290 20:26:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.548 20:26:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:54.807 20:26:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:27:54.807 20:26:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:55.065 true 00:27:55.065 20:26:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:27:55.065 20:26:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:55.323 20:26:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.581 20:26:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:55.581 20:26:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:55.839 true 00:27:55.839 20:26:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:27:55.839 20:26:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:56.774 20:26:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.033 20:26:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:57.033 20:26:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:57.291 true 00:27:57.291 20:26:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:27:57.291 20:26:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.549 20:26:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.807 20:26:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:27:57.807 20:26:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:27:58.064 true 00:27:58.064 20:26:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:27:58.065 20:26:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:58.322 20:26:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.580 20:26:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:27:58.580 20:26:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:27:58.839 true 00:27:58.839 20:26:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:27:58.839 20:26:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:59.776 20:26:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.034 20:26:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:00.034 20:26:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:00.600 true 00:28:00.600 20:26:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:28:00.600 20:26:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.981 20:26:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:01.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.982 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:01.982 20:26:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:01.982 20:26:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:02.241 true 00:28:02.241 20:26:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:28:02.241 20:26:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.179 20:26:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.437 20:26:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:03.437 20:26:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:03.695 true 00:28:03.695 20:26:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:28:03.695 20:26:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:04.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.271 20:26:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:04.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.529 20:26:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:04.529 20:26:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:05.094 true 00:28:05.094 20:27:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:28:05.094 20:27:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.659 20:27:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:05.916 20:27:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:05.917 20:27:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:06.174 true 00:28:06.174 20:27:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:28:06.174 20:27:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:06.739 20:27:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:06.739 20:27:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:06.739 20:27:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:07.307 true 00:28:07.307 20:27:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:28:07.307 20:27:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.307 20:27:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.873 20:27:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:07.873 20:27:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:07.873 true 00:28:08.131 20:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:28:08.131 20:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.696 20:27:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:08.954 20:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:08.954 20:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:09.213 true 00:28:09.213 20:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:28:09.213 20:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.524 20:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:09.782 20:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:09.782 20:27:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:10.041 true 00:28:10.041 20:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:28:10.041 20:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.607 20:27:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:10.607 Initializing NVMe Controllers 00:28:10.607 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:28:10.607 Controller IO queue size 128, less than required. 00:28:10.607 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:10.607 Controller IO queue size 128, less than required. 00:28:10.607 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:10.607 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:10.607 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:10.607 Initialization complete. Launching workers. 
00:28:10.607 ======================================================== 00:28:10.607 Latency(us) 00:28:10.607 Device Information : IOPS MiB/s Average min max 00:28:10.607 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1578.31 0.77 39257.44 3420.30 1027529.69 00:28:10.607 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9300.05 4.54 13764.00 3186.43 669644.35 00:28:10.607 ======================================================== 00:28:10.607 Total : 10878.36 5.31 17462.77 3186.43 1027529.69 00:28:10.607 00:28:10.865 20:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:10.865 20:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:11.124 true 00:28:11.124 20:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62490 00:28:11.124 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (62490) - No such process 00:28:11.124 20:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 62490 00:28:11.124 20:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.382 20:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:11.639 20:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:28:11.639 20:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:11.639 20:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:11.639 20:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:11.639 20:27:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:11.897 null0 00:28:11.897 20:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:11.897 20:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:11.897 20:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:12.156 null1 00:28:12.156 20:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:12.156 20:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:12.156 20:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:12.415 null2 00:28:12.415 20:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:12.415 20:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:12.415 20:27:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:12.997 null3 00:28:12.997 20:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:12.997 20:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:12.997 20:27:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:13.262 null4 00:28:13.262 20:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:13.262 20:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:13.262 20:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:13.521 null5 00:28:13.521 20:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:13.521 20:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:13.521 20:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:13.779 null6 00:28:13.779 20:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:13.779 20:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:13.779 20:27:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:14.038 null7 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:14.038 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
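The heavily interleaved xtrace through here is eight concurrent copies of add_remove, one per null bdev; the wait on their PIDs appears just below. Reconstructed from the traced script lines (ns_hotplug_stress.sh@14-18 and @58-66), the phase amounts to:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# One worker: add namespace $nsid backed by $bdev, remove it, ten times.
add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        "$RPC" nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"
        "$RPC" nvmf_subsystem_remove_ns "$NQN" "$nsid"
    done
}

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    "$RPC" bdev_null_create "null$i" 100 4096   # 100 MiB, 4 KiB blocks
    add_remove $((i + 1)) "null$i" &            # NSID i+1 backed by null$i
    pids+=($!)
done
wait "${pids[@]}"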
00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 63497 63498 63500 63502 63503 63504 63508 63510 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.039 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:14.297 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:14.297 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:14.555 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:14.555 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:14.555 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:14.555 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:14.555 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:14.555 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:14.555 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.555 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.555 20:27:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:14.813 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:14.813 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:14.813 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:14.813 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:15.071 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.071 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.071 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:15.071 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.071 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.071 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:15.071 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.071 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.071 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:15.071 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.071 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.071 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:15.071 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.071 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.071 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:15.329 20:27:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.329 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.329 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:15.329 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.329 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.329 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:15.329 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:15.329 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:15.587 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:15.587 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:15.587 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:15.587 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:15.587 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:15.587 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:15.845 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.845 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.845 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:15.845 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:15.845 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:15.845 20:27:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:16.103 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.103 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.103 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:16.103 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.103 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.103 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:16.103 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.103 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.103 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:16.103 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.103 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.104 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:16.104 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.104 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.104 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:16.361 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.361 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.361 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:16.362 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:16.362 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.362 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:16.620 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:16.620 20:27:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:16.620 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:16.620 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:16.882 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:16.882 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.882 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.882 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:16.882 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.882 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.882 20:27:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:16.882 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.882 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.882 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:16.882 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.882 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.882 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:16.882 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:16.882 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.882 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:17.139 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.139 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.139 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
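The interleaved xtrace above comes from test/nvmf/target/ns_hotplug_stress.sh. Each worker is one add_remove invocation: line @14 binds a namespace ID to a null bdev (e.g. nsid=8, bdev=null7), and lines @16-@18 attach and detach that namespace ten times through rpc.py. A minimal sketch of the loop, reconstructed from the traced line numbers rather than copied from the repository, so treat the exact body as an approximation:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as traced
    nqn=nqn.2016-06.io.spdk:cnode1                       # subsystem under stress

    add_remove() {                    # sketch of ns_hotplug_stress.sh@14-@18
        local nsid=$1 bdev=$2         # @14, e.g. nsid=8 bdev=null7
        for ((i = 0; i < 10; i++)); do                                  # @16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
        done
    }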
00:28:17.139 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.139 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.139 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:17.139 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.139 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.139 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:17.396 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:17.396 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:17.396 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.396 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:17.396 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:17.654 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:17.654 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:17.654 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:17.654 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.654 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.654 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:17.654 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.654 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.654 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:17.911 20:27:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.911 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.911 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:17.911 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.911 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.911 20:27:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:17.911 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.911 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.911 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:18.168 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.168 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.168 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:18.168 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.168 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.168 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:18.168 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:18.168 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.168 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.168 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:18.168 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:18.425 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:18.425 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
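The workers themselves are launched by the loop traced at ns_hotplug_stress.sh@62-@66 (visible further up, where "wait 63497 63498 63500 63502 63503 63504 63508 63510" reaps exactly eight PIDs, one per null0..null7 bdev). A sketch consistent with that trace; nthreads and the nsid/bdev pairing are inferred from the traced calls, not quoted from the script:

    pids=()
    nthreads=8                                # matches the eight reaped PIDs
    for ((i = 0; i < nthreads; i++)); do      # @62
        add_remove "$((i + 1))" "null$i" &    # @63, e.g. add_remove 8 null7
        pids+=("$!")                          # @64: remember each worker PID
    done
    wait "${pids[@]}"                         # @66: block until all finish

Because all eight background jobs share a single xtrace stream, their add and remove lines interleave, which is why the namespace IDs above appear in shuffled order.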
00:28:18.425 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:18.425 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:18.682 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.682 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.682 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:18.682 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:18.682 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:18.682 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.682 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.682 20:27:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:18.938 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:18.938 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.938 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.938 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:18.938 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.938 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.938 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:18.938 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.938 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.938 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.938 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:18.938 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:28:18.938 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:19.195 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.195 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.195 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:19.195 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:19.195 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:19.195 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.195 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.195 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:19.195 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:19.195 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.453 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.453 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.453 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:19.453 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:19.453 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:19.453 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.453 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.453 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:19.711 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.711 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
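Every rpc.py invocation in this trace is a thin JSON-RPC client talking to the target over its Unix socket (/var/tmp/spdk.sock by default). As a rough illustration, the add/remove pair for namespace 5 corresponds to requests shaped like the ones below; the id values, and pushing them by hand with a Unix-socket-capable netcat, are illustrative only and not something the test does:

    # Approximate JSON-RPC payloads behind the two traced rpc.py subcommands
    add_req='{"jsonrpc":"2.0","id":1,"method":"nvmf_subsystem_add_ns",
              "params":{"nqn":"nqn.2016-06.io.spdk:cnode1",
                        "namespace":{"bdev_name":"null4","nsid":5}}}'
    del_req='{"jsonrpc":"2.0","id":2,"method":"nvmf_subsystem_remove_ns",
              "params":{"nqn":"nqn.2016-06.io.spdk:cnode1","nsid":5}}'
    printf '%s\n' "$add_req" | nc -U /var/tmp/spdk.sock   # one way to send raw requests
    printf '%s\n' "$del_req" | nc -U /var/tmp/spdk.sock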
00:28:19.711 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:19.711 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:19.711 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:19.711 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.711 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.711 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:19.711 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.711 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.711 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:19.969 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:19.969 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.969 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.969 20:27:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:19.969 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.969 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.969 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:19.969 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.969 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.969 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:19.969 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.969 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.969 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:28:19.969 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.227 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:20.227 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:20.227 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:20.227 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.227 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.227 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:20.227 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:20.227 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:20.227 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:20.485 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.485 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.485 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:20.485 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.485 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.485 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:20.742 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.742 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.742 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:20.742 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.742 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:28:20.742 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:20.742 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.742 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.742 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:20.742 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.742 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.742 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:20.742 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.742 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:20.742 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.742 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.742 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:20.742 20:27:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:21.000 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.000 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.000 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:21.000 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:21.257 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:21.257 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:21.257 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:21.257 20:27:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.257 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.257 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:21.257 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.257 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.257 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:21.257 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.257 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.257 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:21.257 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:21.514 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.514 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.514 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:21.514 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:21.514 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.514 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.514 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:21.514 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:21.772 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:21.772 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.772 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.772 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
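While the workers churn, the target's view of nqn.2016-06.io.spdk:cnode1 keeps changing: a namespace ID is only present between its add and its remove. The traced script never inspects this, but a snapshot can be taken from another shell with the target's query RPC; the grep below assumes rpc.py's default pretty-printed JSON output:

    # Observation helper (not part of the traced script): list the
    # namespace IDs currently attached while the stress loop runs.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems \
        | grep -o '"nsid": [0-9]*' | sort -u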
00:28:21.772 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.772 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.772 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:21.772 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:21.772 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.772 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.772 20:27:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:22.030 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.030 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.030 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:22.030 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:22.030 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.030 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.030 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:22.030 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:22.288 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:22.289 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:22.289 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:22.289 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.289 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.289 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:22.289 20:27:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.289 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.289 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:22.289 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.289 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.289 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:22.547 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.547 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.547 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:22.547 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.547 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.547 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:22.547 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.547 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.547 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:22.547 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.547 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.805 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:22.805 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.805 20:27:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.805 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.805 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.805 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:23.064 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.064 20:27:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:23.323 rmmod nvme_tcp 00:28:23.323 rmmod nvme_fabrics 00:28:23.323 rmmod nvme_keyring 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 62348 ']' 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 62348 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 62348 ']' 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 62348 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62348 00:28:23.323 killing process with pid 62348 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62348' 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 62348 00:28:23.323 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 62348 00:28:23.581 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:23.581 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:23.581 20:27:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:23.581 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:23.581 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:23.581 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:28:23.581 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:28:23.581 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:23.581 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:23.581 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:23.581 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:23.581 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:23.581 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:23.581 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:23.581 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:23.581 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:23.581 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:23.581 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:23.581 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:23.581 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:23.840 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:23.840 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:23.840 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:23.840 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.840 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.840 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.840 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:28:23.840 00:28:23.840 real 0m47.739s 00:28:23.840 user 4m3.668s 00:28:23.840 sys 0m14.187s 00:28:23.840 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:23.840 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:23.840 
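With the stress loop done, nvmftestfini (ns_hotplug_stress.sh@70) unwinds the fixture: it unloads the initiator-side kernel modules (the rmmod lines), kills the nvmf_tgt process (pid 62348 in this run), restores iptables, and deletes the veth/bridge links and the nvmf_tgt_ns_spdk network namespace. The kill step traced at common/autotest_common.sh@950-@974 condenses to roughly the following sketch; the sudo branch and error handling are trimmed:

    killprocess() {                                        # sketch of @950-@974
        local pid=$1
        kill -0 "$pid"                                     # @954: is it alive?
        process_name=$(ps --no-headers -o comm= "$pid")    # @956 (Linux path)
        echo "killing process with pid $pid"               # @968
        kill "$pid"                                        # @969
        wait "$pid"                                        # @974: reap the target
    }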
************************************
00:28:23.840 END TEST nvmf_ns_hotplug_stress
00:28:23.840 ************************************
00:28:23.840 20:27:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:28:23.840 20:27:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:28:23.840 20:27:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:28:23.840 20:27:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:28:23.840 ************************************
00:28:23.840 START TEST nvmf_delete_subsystem
00:28:23.840 ************************************
00:28:23.840 20:27:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:28:23.840 * Looking for test storage...
00:28:23.840 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:28:23.840 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:28:23.840 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version
00:28:23.840 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:24.099 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:24.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.100 --rc genhtml_branch_coverage=1 00:28:24.100 --rc genhtml_function_coverage=1 00:28:24.100 --rc genhtml_legend=1 00:28:24.100 --rc geninfo_all_blocks=1 00:28:24.100 --rc geninfo_unexecuted_blocks=1 00:28:24.100 00:28:24.100 ' 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:24.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.100 --rc genhtml_branch_coverage=1 00:28:24.100 --rc genhtml_function_coverage=1 00:28:24.100 --rc genhtml_legend=1 00:28:24.100 --rc geninfo_all_blocks=1 00:28:24.100 --rc geninfo_unexecuted_blocks=1 00:28:24.100 00:28:24.100 ' 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:24.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.100 --rc genhtml_branch_coverage=1 00:28:24.100 --rc genhtml_function_coverage=1 00:28:24.100 --rc genhtml_legend=1 00:28:24.100 --rc geninfo_all_blocks=1 00:28:24.100 --rc geninfo_unexecuted_blocks=1 00:28:24.100 00:28:24.100 ' 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:24.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.100 --rc genhtml_branch_coverage=1 00:28:24.100 --rc genhtml_function_coverage=1 00:28:24.100 --rc genhtml_legend=1 00:28:24.100 --rc geninfo_all_blocks=1 00:28:24.100 --rc geninfo_unexecuted_blocks=1 00:28:24.100 00:28:24.100 ' 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated several more times, elided ...]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.100
20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[... repeated toolchain directories elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... repeated toolchain directories elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... remainder identical to the PATH exported above, elided ...] 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:24.100 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- #
nvmftestinit 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # nvmf_veth_init 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:24.100 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:24.101 Cannot find device "nvmf_init_br" 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:24.101 Cannot find device "nvmf_init_br2" 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:24.101 Cannot find device "nvmf_tgt_br" 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:24.101 Cannot find device "nvmf_tgt_br2" 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:24.101 Cannot find device "nvmf_init_br" 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:24.101 Cannot find device "nvmf_init_br2" 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:24.101 Cannot find device "nvmf_tgt_br" 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:24.101 Cannot find device "nvmf_tgt_br2" 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:24.101 Cannot find device "nvmf_br" 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:24.101 Cannot find device "nvmf_init_if" 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:24.101 Cannot find device "nvmf_init_if2" 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:24.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
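The "Cannot find device" and "Cannot open network namespace" complaints above are expected: nvmf_veth_init begins by tearing down whatever interfaces and namespaces an earlier run may have left behind, and each probe is immediately followed by a "# true" xtrace entry because failures at this stage are deliberately ignored. A minimal sketch of the two patterns at work (the helper name and the netns delete call are illustrative; the real script inlines the ip commands):

    # Idempotent teardown: deleting a link or namespace that is absent is fine.
    try_del() { "$@" 2> /dev/null || true; }
    try_del ip link delete nvmf_init_br
    try_del ip netns delete nvmf_tgt_ns_spdk

    # Target-side commands then run inside the fresh namespace through the
    # NVMF_TARGET_NS_CMD array assigned above:
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" ip link set lo up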
00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:24.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:28:24.101 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br 
up 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:24.359 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:24.359 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:24.359 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:28:24.359 00:28:24.359 --- 10.0.0.3 ping statistics --- 00:28:24.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.359 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:28:24.360 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:24.360 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:24.360 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:28:24.360 00:28:24.360 --- 10.0.0.4 ping statistics --- 00:28:24.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.360 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:28:24.360 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:24.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:24.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:28:24.360 00:28:24.360 --- 10.0.0.1 ping statistics --- 00:28:24.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.360 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:28:24.360 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:24.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:24.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:28:24.360 00:28:24.360 --- 10.0.0.2 ping statistics --- 00:28:24.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.360 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:28:24.360 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:24.360 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # return 0 00:28:24.360 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:24.360 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.360 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:24.360 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:24.360 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.360 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:24.360 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:24.617 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:24.617 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:24.617 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:24.617 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.617 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=64921 00:28:24.617 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 64921 00:28:24.617 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:24.618 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 64921 ']' 00:28:24.618 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.618 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:24.618 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.618 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:24.618 20:27:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.618 [2024-10-01 20:27:19.692872] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
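At this point nvmfappstart has launched the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3, pid 64921) and waitforlisten blocks until the application answers on /var/tmp/spdk.sock. A rough sketch of that wait, assuming a plain poll for the RPC socket (the real helper in autotest_common.sh does more, including probing the RPC server once the socket exists):

    nvmfpid=64921                                 # pid reported above
    rpc_sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        [[ -S $rpc_sock ]] && break               # socket is up, target is listening
        kill -0 "$nvmfpid" 2> /dev/null || exit 1 # hypothetical guard: bail out if the target died
        sleep 0.1
    done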
00:28:24.618 [2024-10-01 20:27:19.693001] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.618 [2024-10-01 20:27:19.836518] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:24.875 [2024-10-01 20:27:19.905183] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.875 [2024-10-01 20:27:19.905237] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:24.875 [2024-10-01 20:27:19.905250] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:24.875 [2024-10-01 20:27:19.905260] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:24.875 [2024-10-01 20:27:19.905269] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:24.875 [2024-10-01 20:27:19.905435] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.875 [2024-10-01 20:27:19.905450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.875 [2024-10-01 20:27:20.092988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.875 [2024-10-01 20:27:20.113139] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:24.875 NULL1 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.875 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:25.132 Delay0 00:28:25.132 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.132 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:25.132 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.132 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:25.132 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.132 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=64958 00:28:25.132 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:25.132 20:27:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:25.132 [2024-10-01 20:27:20.304091] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
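The target is now fully provisioned through rpc_cmd, the harness wrapper around SPDK's JSON-RPC client (scripts/rpc.py). Outside the harness the same control-plane sequence could be issued directly; a sketch, with every flag copied from the xtrace above and the default RPC socket assumed:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_null_create NULL1 1000 512      # 1000 MB null backing bdev, 512-byte blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev is the crux of the test: its latency arguments are in microseconds, so Delay0 holds every read and write for about a second. With spdk_nvme_perf then running at queue depth 128 for 5 seconds, the nvmf_delete_subsystem call that follows is guaranteed to land while a full queue of commands is still in flight, which is exactly the race this test exercises.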
00:28:27.030 20:27:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:27.030 20:27:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:27.290 [... long run of interleaved 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' notices elided ...]
00:28:27.290 [2024-10-01 20:27:22.341676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142f8b0 is same with the state(6) to be set
00:28:27.291 [... further 'Read/Write completed with error (sct=0, sc=8)' completions, paired 'starting I/O failed: -6' notices, and a run of bare 'starting I/O failed: -6' entries elided ...]
00:28:28.226 [2024-10-01 20:27:23.323619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142afb0 is same with the state(6) to be set
00:28:28.226 [... 'Read/Write completed with error (sct=0, sc=8)' completions elided ...]
00:28:28.227 [2024-10-01 20:27:23.339991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0e4c00d640 is same with the state(6) to be set
00:28:28.227 [... 'Read/Write completed with error (sct=0, sc=8)' completions elided ...]
00:28:28.227 [2024-10-01 20:27:23.341376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0e4c00cfe0 is same with the state(6) to be set
00:28:28.227 [... 'Read/Write completed with error (sct=0, sc=8)' completions elided ...]
00:28:28.227 [2024-10-01 20:27:23.341722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ebe0 is same with the state(6) to be set
00:28:28.227 [... 'Read/Write completed with error (sct=0, sc=8)' completions elided ...]
00:28:28.227 [2024-10-01 20:27:23.342304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142eb20 is same with the state(6) to be set
00:28:28.227 Initializing NVMe Controllers
00:28:28.227 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:28:28.227 Controller IO queue size 128, less than required.
00:28:28.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
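Every failed completion in the storm above carries the same status pair: sct=0 is the generic command status type, and within that set sc=8 is "Command Aborted due to SQ Deletion" in the NVMe base specification. That is the outcome the test is looking for: nvmf_delete_subsystem tears down the subsystem's queue pairs while the delayed I/Os are still outstanding, the host driver reports each queued command as aborted, new submissions fail with -6, and perf winds up with the "errors occurred" verdict in the summary that follows.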
00:28:28.227 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:28:28.227 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:28:28.227 Initialization complete. Launching workers.
00:28:28.227 ========================================================
00:28:28.227                                                                            Latency(us)
00:28:28.227 Device Information                                                       :   IOPS  MiB/s    Average        min        max
00:28:28.227 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.32   0.08  906616.42     536.40 2004217.96
00:28:28.227 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 172.30   0.08  979505.93    1214.95 2002409.48
00:28:28.227 ========================================================
00:28:28.227 Total                                                                    : 342.62   0.17  943272.45     536.40 2004217.96
00:28:28.227
00:28:28.227 [2024-10-01 20:27:23.342912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142afb0 (9): Bad file descriptor 00:28:28.227 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:28:28.227 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.227 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:28:28.227 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 64958 00:28:28.227 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 64958 00:28:28.793 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (64958) - No such process 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 64958 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 64958 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 64958 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:28.793 [2024-10-01 20:27:23.863333] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=65004 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65004 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:28.793 20:27:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:29.051 [2024-10-01 20:27:24.046819] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
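The second perf run (pid 65004) is capped at -t 3 seconds and nothing deletes the subsystem this time, so the script simply polls until the process exits on its own: a liveness check twice a second with roughly a 10-second budget. Reconstructed from the xtrace entries below (loop shape inferred from the script's line numbers; the timeout action is a sketch, since the failure branch never fires in this run):

    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do   # pid 65004 here
        (( delay++ > 20 )) && return 1           # hypothetical timeout handling
        sleep 0.5
    done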
00:28:29.308 20:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:29.308 20:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65004 00:28:29.308 20:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:29.874 20:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:29.874 20:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65004 00:28:29.874 20:27:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:30.441 20:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:30.441 20:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65004 00:28:30.441 20:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:30.700 20:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:30.700 20:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65004 00:28:30.700 20:27:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:31.266 20:27:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:31.266 20:27:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65004 00:28:31.266 20:27:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:31.833 20:27:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:31.833 20:27:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65004 00:28:31.833 20:27:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:32.091 Initializing NVMe Controllers 00:28:32.091 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:28:32.091 Controller IO queue size 128, less than required. 00:28:32.092 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:32.092 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:32.092 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:32.092 Initialization complete. Launching workers. 
00:28:32.092 ========================================================
00:28:32.092 Latency(us)
00:28:32.092 Device Information : IOPS MiB/s Average min max
00:28:32.092 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003881.83 1000159.97 1041759.31
00:28:32.092 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005191.54 1000175.13 1041737.90
00:28:32.092 ========================================================
00:28:32.092 Total : 256.00 0.12 1004536.69 1000159.97 1041759.31
00:28:32.092
00:28:32.351 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:32.351 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65004
00:28:32.352 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (65004) - No such process
00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 65004
00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup
00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:32.352 rmmod nvme_tcp
00:28:32.352 rmmod nvme_fabrics
00:28:32.352 rmmod nvme_keyring
00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 64921 ']'
00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 64921
00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 64921 ']'
00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 64921
00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64921
00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:28:32.352 killing
process with pid 64921 00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64921' 00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 64921 00:28:32.352 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 64921 00:28:32.611 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:32.611 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:32.611 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:32.611 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:32.611 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:32.611 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:28:32.611 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:28:32.611 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:32.611 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:32.611 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:32.611 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:32.611 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:32.611 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:32.611 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:32.611 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:32.611 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:32.611 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:32.611 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:32.870 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:32.870 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:32.870 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:32.870 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:32.871 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:32.871 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.871 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:28:32.871 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.871 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:28:32.871 00:28:32.871 real 0m9.023s 00:28:32.871 user 0m27.475s 00:28:32.871 sys 0m1.540s 00:28:32.871 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:32.871 ************************************ 00:28:32.871 END TEST nvmf_delete_subsystem 00:28:32.871 ************************************ 00:28:32.871 20:27:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:32.871 20:27:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:28:32.871 20:27:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:32.871 20:27:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:32.871 20:27:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:28:32.871 ************************************ 00:28:32.871 START TEST nvmf_host_management 00:28:32.871 ************************************ 00:28:32.871 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:28:32.871 * Looking for test storage... 00:28:32.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:32.871 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:32.871 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:28:32.871 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:33.131 
20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:33.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.131 --rc genhtml_branch_coverage=1 00:28:33.131 --rc genhtml_function_coverage=1 00:28:33.131 --rc genhtml_legend=1 00:28:33.131 --rc geninfo_all_blocks=1 00:28:33.131 --rc geninfo_unexecuted_blocks=1 00:28:33.131 00:28:33.131 ' 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:33.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.131 --rc genhtml_branch_coverage=1 00:28:33.131 --rc genhtml_function_coverage=1 00:28:33.131 --rc genhtml_legend=1 00:28:33.131 --rc geninfo_all_blocks=1 00:28:33.131 --rc geninfo_unexecuted_blocks=1 00:28:33.131 00:28:33.131 ' 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:33.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.131 --rc genhtml_branch_coverage=1 00:28:33.131 --rc genhtml_function_coverage=1 00:28:33.131 --rc genhtml_legend=1 00:28:33.131 --rc geninfo_all_blocks=1 00:28:33.131 --rc geninfo_unexecuted_blocks=1 00:28:33.131 00:28:33.131 ' 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:33.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.131 --rc genhtml_branch_coverage=1 00:28:33.131 --rc 
genhtml_function_coverage=1 00:28:33.131 --rc genhtml_legend=1 00:28:33.131 --rc geninfo_all_blocks=1 00:28:33.131 --rc geninfo_unexecuted_blocks=1 00:28:33.131 00:28:33.131 ' 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.131 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:28:33.132 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # nvmf_veth_init 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:33.132 20:27:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:33.132 Cannot find device "nvmf_init_br" 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:33.132 Cannot find device "nvmf_init_br2" 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:33.132 Cannot find device "nvmf_tgt_br" 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:33.132 Cannot find device "nvmf_tgt_br2" 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:33.132 Cannot find device "nvmf_init_br" 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:33.132 Cannot find device "nvmf_init_br2" 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:33.132 Cannot find device "nvmf_tgt_br" 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:33.132 Cannot find device "nvmf_tgt_br2" 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:33.132 Cannot find device "nvmf_br" 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:28:33.132 20:27:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:33.132 Cannot find device "nvmf_init_if" 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:33.132 Cannot find device "nvmf_init_if2" 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:33.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:33.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:33.132 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:33.391 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:33.391 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:33.391 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:28:33.391 00:28:33.391 --- 10.0.0.3 ping statistics --- 00:28:33.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.392 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:33.392 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:28:33.392 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:28:33.392 00:28:33.392 --- 10.0.0.4 ping statistics --- 00:28:33.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.392 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:33.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:33.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:28:33.392 00:28:33.392 --- 10.0.0.1 ping statistics --- 00:28:33.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.392 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:33.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:28:33.392 00:28:33.392 --- 10.0.0.2 ping statistics --- 00:28:33.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.392 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # return 0 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=65294 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 65294 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # 
'[' -z 65294 ']' 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:33.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:33.392 20:27:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:33.651 [2024-10-01 20:27:28.678467] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:28:33.651 [2024-10-01 20:27:28.678573] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.651 [2024-10-01 20:27:28.821097] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:33.651 [2024-10-01 20:27:28.891161] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.651 [2024-10-01 20:27:28.891216] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.651 [2024-10-01 20:27:28.891229] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.651 [2024-10-01 20:27:28.891240] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:33.651 [2024-10-01 20:27:28.891249] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
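host_management.sh now brings up its own target: nvmfappstart -m 0x1E runs nvmf_tgt inside the nvmf_tgt_ns_spdk namespace as pid 65294, and waitforlisten blocks until the RPC socket /var/tmp/spdk.sock answers. The core mask 0x1E is binary 11110, one reactor per core on cores 1 through 4, which is exactly what the four reactor_run notices just below report. A standalone way to decode such a mask (illustration only, not part of the test scripts):

    mask=0x1E
    for (( core = 0; core < 8; core++ )); do
        # Bit n set in the mask => SPDK pins a reactor thread to core n.
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done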
00:28:33.651 [2024-10-01 20:27:28.891947] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:33.651 [2024-10-01 20:27:28.892133] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:33.651 [2024-10-01 20:27:28.892245] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:28:33.651 [2024-10-01 20:27:28.892253] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.914 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:33.914 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:28:33.914 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:33.914 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:33.914 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:33.914 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.914 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:33.914 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.914 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:33.914 [2024-10-01 20:27:29.079777] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.914 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.914 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:33.914 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:33.914 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:33.914 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:28:33.914 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:33.914 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:33.914 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.914 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:33.914 Malloc0 00:28:33.914 [2024-10-01 20:27:29.135700] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:33.914 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.914 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:33.914 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:33.914 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:34.173 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=65347 00:28:34.173 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65347 /var/tmp/bdevperf.sock 00:28:34.173 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 65347 ']' 00:28:34.173 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:34.173 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:34.173 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:34.173 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:34.173 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:34.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:34.173 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:28:34.173 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:34.173 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:34.173 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:28:34.173 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:34.173 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:34.173 { 00:28:34.173 "params": { 00:28:34.173 "name": "Nvme$subsystem", 00:28:34.173 "trtype": "$TEST_TRANSPORT", 00:28:34.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.173 "adrfam": "ipv4", 00:28:34.173 "trsvcid": "$NVMF_PORT", 00:28:34.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.173 "hdgst": ${hdgst:-false}, 00:28:34.173 "ddgst": ${ddgst:-false} 00:28:34.173 }, 00:28:34.173 "method": "bdev_nvme_attach_controller" 00:28:34.173 } 00:28:34.173 EOF 00:28:34.173 )") 00:28:34.173 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:28:34.173 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:28:34.173 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:28:34.173 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:34.173 "params": { 00:28:34.173 "name": "Nvme0", 00:28:34.173 "trtype": "tcp", 00:28:34.173 "traddr": "10.0.0.3", 00:28:34.173 "adrfam": "ipv4", 00:28:34.173 "trsvcid": "4420", 00:28:34.173 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:34.173 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:34.173 "hdgst": false, 00:28:34.173 "ddgst": false 00:28:34.173 }, 00:28:34.173 "method": "bdev_nvme_attach_controller" 00:28:34.173 }' 00:28:34.173 [2024-10-01 20:27:29.234194] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:28:34.173 [2024-10-01 20:27:29.234281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65347 ] 00:28:34.173 [2024-10-01 20:27:29.373607] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.431 [2024-10-01 20:27:29.443532] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.431 Running I/O for 10 seconds... 00:28:34.690 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:34.690 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:28:34.690 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:34.690 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.690 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:34.690 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.690 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:34.690 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:34.690 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:34.690 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:34.690 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:34.690 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:34.690 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:34.690 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:34.690 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:34.690 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:34.690 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.690 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:34.690 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.690 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:28:34.690 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:28:34.690 20:27:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:34.951 20:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 
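bdevperf is attached to Nvme0n1 over TCP and running its 10-second verify workload (-q 64 -o 65536), and waitforio polls the bdev statistics until at least 100 reads have completed: the sample above came back as read_io_count=67, short of the threshold, so the loop sleeps 0.25s and retries; the next sample (579, just below) passes and the test moves on. A hedged reconstruction of the loop from the traced lines (rpc_cmd, the socket path, and the thresholds are taken from the trace; the real host_management.sh may differ in detail):

    waitforio() {
        local i=10 ret=1 read_io_count
        while (( i != 0 )); do
            # Ask bdevperf's RPC server for per-bdev I/O counters.
            read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
                jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0   # enough reads observed; the I/O path is demonstrably live
                break
            fi
            sleep 0.25
            (( i-- ))
        done
        return "$ret"
    }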
00:28:34.951 20:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:28:34.951 20:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:28:34.951 20:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:28:34.951 20:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:34.951 20:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:28:34.951 20:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:34.951 20:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579
00:28:34.951 20:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']'
00:28:34.951 20:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:28:34.951 20:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
00:28:34.951 20:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:28:34.951 20:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:28:34.951 20:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:34.951 20:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:28:34.951 [2024-10-01 20:27:30.092826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:34.951 [2024-10-01 20:27:30.092877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:34.952 [... 63 further command/completion pairs of identical form elided: WRITE cid:48-63 lba:88064-89984 and READ cid:0-46 lba:81920-87808, each len:128, each completed ABORTED - SQ DELETION (00/08) ...]
00:28:34.953 [2024-10-01 20:27:30.094364] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xaddd50 was disconnected and freed. reset controller.
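The trace above is host_management.sh's waitforio helper: it polls bdev_get_iostat on the bdevperf RPC socket until Nvme0n1 has served at least 100 reads (67 on the first poll, 579 on the second), then yanks host0 out of cnode0, which is why every in-flight WRITE and READ completes as ABORTED - SQ DELETION once the target tears down the queue pair. A minimal sketch of that polling pattern, assuming rpc.py and jq are on PATH (function and variable names here are illustrative, not the script's own):

    # Poll a bdev's read counter over the bdevperf RPC socket until it
    # crosses a threshold, retrying 10 times with a 0.25 s pause.
    wait_for_read_io() {
        local sock=$1 bdev=$2 want=$3 i ops
        for ((i = 10; i != 0; i--)); do
            ops=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
                  | jq -r '.bdevs[0].num_read_ops')
            [ "$ops" -ge "$want" ] && return 0
            sleep 0.25
        done
        return 1
    }
    # e.g.: wait_for_read_io /var/tmp/bdevperf.sock Nvme0n1 100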
00:28:34.953 [2024-10-01 20:27:30.095561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:28:34.953 task offset: 87936 on job bdev=Nvme0n1 fails
00:28:34.953
00:28:34.953 Latency(us)
00:28:34.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:34.953 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:34.953 Job: Nvme0n1 ended in about 0.45 seconds with error
00:28:34.953 Verification LBA range: start 0x0 length 0x400
00:28:34.953 Nvme0n1 : 0.45 1409.98 88.12 141.00 0.00 39857.19 2159.71 38606.66
00:28:34.953 ===================================================================================================================
00:28:34.953 Total : 1409.98 88.12 141.00 0.00 39857.19 2159.71 38606.66
00:28:34.953 20:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:34.953 20:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:28:34.954 [2024-10-01 20:27:30.097829] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:34.954 [2024-10-01 20:27:30.097863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad88d0 (9): Bad file descriptor
00:28:34.954 20:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:34.954 20:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:28:34.954 20:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:34.954 20:27:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:28:34.954 [2024-10-01 20:27:30.108759] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
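The failed-run table above is the point of the test: with host0 deauthorized mid-verify, bdevperf records 141 Fail/s and the job dies after 0.45 s, and the bdev layer's automatic controller reset can only succeed once the host is re-added. Stripped of the harness, the fault-injection pair is just two target-side RPCs (a sketch, using the default rpc.py socket):

    # Revoke the initiator's access mid-I/O, then restore it so the
    # automatic controller reset can reconnect.
    rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1    # in-flight commands abort; bdev_nvme schedules a reset
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0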
00:28:35.886 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65347 00:28:35.886 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65347) - No such process 00:28:35.886 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:35.886 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:35.886 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:35.887 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:35.887 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:28:35.887 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:28:35.887 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:35.887 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:35.887 { 00:28:35.887 "params": { 00:28:35.887 "name": "Nvme$subsystem", 00:28:35.887 "trtype": "$TEST_TRANSPORT", 00:28:35.887 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.887 "adrfam": "ipv4", 00:28:35.887 "trsvcid": "$NVMF_PORT", 00:28:35.887 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.887 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.887 "hdgst": ${hdgst:-false}, 00:28:35.887 "ddgst": ${ddgst:-false} 00:28:35.887 }, 00:28:35.887 "method": "bdev_nvme_attach_controller" 00:28:35.887 } 00:28:35.887 EOF 00:28:35.887 )") 00:28:35.887 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:28:35.887 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:28:35.887 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:28:35.887 20:27:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:35.887 "params": { 00:28:35.887 "name": "Nvme0", 00:28:35.887 "trtype": "tcp", 00:28:35.887 "traddr": "10.0.0.3", 00:28:35.887 "adrfam": "ipv4", 00:28:35.887 "trsvcid": "4420", 00:28:35.887 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:35.887 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:35.887 "hdgst": false, 00:28:35.887 "ddgst": false 00:28:35.887 }, 00:28:35.887 "method": "bdev_nvme_attach_controller" 00:28:35.887 }' 00:28:36.145 [2024-10-01 20:27:31.162044] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:28:36.145 [2024-10-01 20:27:31.162149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65393 ] 00:28:36.145 [2024-10-01 20:27:31.294699] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.145 [2024-10-01 20:27:31.381800] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.403 Running I/O for 1 seconds... 
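The relaunched bdevperf (pid 65393) takes its whole bdev configuration through --json /dev/fd/62, i.e. a process substitution fed by gen_nvmf_target_json; the printf output above shows the generated attach parameters. An equivalent standalone invocation might look like the sketch below (nvme0.json is a made-up file name, and the surrounding "subsystems"/"bdev" envelope is the assumed framing for such configs, not shown verbatim in this log):

    # Contents of nvme0.json, mirroring the params printed above:
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false }
        }]
      }]
    }

    # Same workload knobs as the run above: 64-deep queue, 64 KiB verify IOs, 1 s.
    bdevperf --json nvme0.json -q 64 -o 65536 -w verify -t 1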
00:28:37.784 1471.00 IOPS, 91.94 MiB/s 00:28:37.784 Latency(us) 00:28:37.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.784 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:37.784 Verification LBA range: start 0x0 length 0x400 00:28:37.784 Nvme0n1 : 1.03 1485.55 92.85 0.00 0.00 42103.75 6821.70 42419.67 00:28:37.784 =================================================================================================================== 00:28:37.784 Total : 1485.55 92.85 0.00 0.00 42103.75 6821.70 42419.67 00:28:37.784 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:28:37.784 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:28:37.784 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:28:37.784 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:28:37.784 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:28:37.784 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:37.784 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:28:37.784 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:37.784 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:28:37.784 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:37.784 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:37.784 rmmod nvme_tcp 00:28:37.784 rmmod nvme_fabrics 00:28:37.784 rmmod nvme_keyring 00:28:37.784 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:37.784 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:28:37.784 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:28:37.784 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 65294 ']' 00:28:37.785 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 65294 00:28:37.785 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 65294 ']' 00:28:37.785 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 65294 00:28:37.785 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:28:37.785 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:37.785 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65294 00:28:37.785 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:37.785 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:37.785 killing process with pid 65294 00:28:37.785 20:27:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65294' 00:28:37.785 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 65294 00:28:37.785 20:27:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 65294 00:28:38.044 [2024-10-01 20:27:33.173121] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:38.044 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:38.044 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:38.044 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:38.044 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:38.044 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:38.044 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:28:38.044 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:28:38.044 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:38.044 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:38.044 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:38.044 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:38.044 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:38.044 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:38.044 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:38.044 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:38.303 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:38.303 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:38.303 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:38.303 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:38.303 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:38.303 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:38.303 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:38.303 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:38.303 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.303 20:27:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:38.303 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:38.303 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0
00:28:38.303 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:28:38.303
00:28:38.303 real 0m5.426s
00:28:38.303 user 0m19.579s
00:28:38.303 sys 0m1.369s
00:28:38.303 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:38.303 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:28:38.303 ************************************
00:28:38.303 END TEST nvmf_host_management
00:28:38.303 ************************************
00:28:38.303 20:27:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:28:38.303 20:27:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:28:38.303 20:27:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:28:38.303 20:27:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:28:38.303 ************************************
00:28:38.303 START TEST nvmf_lvol
00:28:38.303 ************************************
00:28:38.303 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:28:38.563 * Looking for test storage...
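The real/user/sys summary and the END/START banners framing it come from run_test in autotest_common.sh, through which every suite here is executed. Reduced to its core (the actual helper also juggles xtrace state and argument checks, per the @1101/@1107 traces above), the pattern is roughly:

    # Time a test suite and frame its output the way the log above shows.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"       # e.g. .../test/nvmf/target/nvmf_lvol.sh --transport=tcp
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }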
00:28:38.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:38.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.563 --rc genhtml_branch_coverage=1 00:28:38.563 --rc genhtml_function_coverage=1 00:28:38.563 --rc genhtml_legend=1 00:28:38.563 --rc geninfo_all_blocks=1 00:28:38.563 --rc geninfo_unexecuted_blocks=1 00:28:38.563 00:28:38.563 ' 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:38.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.563 --rc genhtml_branch_coverage=1 00:28:38.563 --rc genhtml_function_coverage=1 00:28:38.563 --rc genhtml_legend=1 00:28:38.563 --rc geninfo_all_blocks=1 00:28:38.563 --rc geninfo_unexecuted_blocks=1 00:28:38.563 00:28:38.563 ' 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:38.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.563 --rc genhtml_branch_coverage=1 00:28:38.563 --rc genhtml_function_coverage=1 00:28:38.563 --rc genhtml_legend=1 00:28:38.563 --rc geninfo_all_blocks=1 00:28:38.563 --rc geninfo_unexecuted_blocks=1 00:28:38.563 00:28:38.563 ' 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:38.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.563 --rc genhtml_branch_coverage=1 00:28:38.563 --rc genhtml_function_coverage=1 00:28:38.563 --rc genhtml_legend=1 00:28:38.563 --rc geninfo_all_blocks=1 00:28:38.563 --rc geninfo_unexecuted_blocks=1 00:28:38.563 00:28:38.563 ' 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:38.563 20:27:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:28:38.563 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:38.564 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:38.564 
20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # nvmf_veth_init 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
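(The earlier "[: : integer expression expected" complaint is nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': test's -eq requires integers, so an unset flag trips it; guarding with "${flag:-0}" would silence it.) nvmftestinit now builds the topology the NVMF_* names above describe: initiator veths on the host, target veths inside nvmf_tgt_ns_spdk, everything bridged over nvmf_br. The trace below runs the full teardown-then-setup; condensed to one initiator/target pair, the skeleton is (commands and addresses taken from the trace, ordering simplified):

    # One veth pair per side, target end moved into the namespace,
    # bridge ports tying host and namespace together.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br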
00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:38.564 Cannot find device "nvmf_init_br" 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:38.564 Cannot find device "nvmf_init_br2" 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:28:38.564 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:38.564 Cannot find device "nvmf_tgt_br" 00:28:38.565 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:28:38.565 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:38.565 Cannot find device "nvmf_tgt_br2" 00:28:38.565 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:28:38.565 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:38.565 Cannot find device "nvmf_init_br" 00:28:38.565 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:28:38.565 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:38.565 Cannot find device "nvmf_init_br2" 00:28:38.565 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:28:38.565 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:38.565 Cannot find device "nvmf_tgt_br" 00:28:38.565 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:28:38.565 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:38.824 Cannot find device "nvmf_tgt_br2" 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:38.824 Cannot find device "nvmf_br" 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:38.824 Cannot find device "nvmf_init_if" 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:38.824 Cannot find device "nvmf_init_if2" 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:38.824 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:38.824 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:38.824 20:27:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:38.824 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:38.824 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:38.824 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:38.824 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:38.824 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:38.824 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:38.824 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:38.824 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:38.824 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:38.824 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:38.824 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:39.084 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:39.084 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:28:39.084 00:28:39.084 --- 10.0.0.3 ping statistics --- 00:28:39.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.084 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:39.084 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:39.084 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:28:39.084 00:28:39.084 --- 10.0.0.4 ping statistics --- 00:28:39.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.084 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:39.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:39.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:28:39.084 00:28:39.084 --- 10.0.0.1 ping statistics --- 00:28:39.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.084 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:39.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:39.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:28:39.084 00:28:39.084 --- 10.0.0.2 ping statistics --- 00:28:39.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.084 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # return 0 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=65659 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 65659 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 65659 ']' 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:39.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:39.084 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:39.084 [2024-10-01 20:27:34.198213] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:28:39.084 [2024-10-01 20:27:34.198351] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.343 [2024-10-01 20:27:34.339648] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:39.343 [2024-10-01 20:27:34.409549] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.343 [2024-10-01 20:27:34.409794] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.343 [2024-10-01 20:27:34.409889] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.343 [2024-10-01 20:27:34.409980] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.343 [2024-10-01 20:27:34.410056] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:39.343 [2024-10-01 20:27:34.410276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.343 [2024-10-01 20:27:34.410318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:39.343 [2024-10-01 20:27:34.410328] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.343 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:39.343 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:28:39.343 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:39.343 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:39.343 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:39.343 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:39.343 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:39.912 [2024-10-01 20:27:34.889418] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:39.912 20:27:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:40.171 20:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:40.171 20:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:40.429 20:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:40.429 20:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:40.686 20:27:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:40.945 20:27:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=082a967e-c567-4841-adcd-d8a810edbd50 00:28:40.945 20:27:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
082a967e-c567-4841-adcd-d8a810edbd50 lvol 20 00:28:41.203 20:27:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=778625cb-7795-4c6d-a5ca-ff82ead9e635 00:28:41.203 20:27:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:41.461 20:27:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 778625cb-7795-4c6d-a5ca-ff82ead9e635 00:28:42.026 20:27:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:42.026 [2024-10-01 20:27:37.277103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:42.284 20:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:28:42.542 20:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65799 00:28:42.542 20:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:28:42.542 20:27:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:28:43.474 20:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 778625cb-7795-4c6d-a5ca-ff82ead9e635 MY_SNAPSHOT 00:28:43.781 20:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b69de194-22aa-42d5-ba83-8f053a0b6d85 00:28:43.781 20:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 778625cb-7795-4c6d-a5ca-ff82ead9e635 30 00:28:44.038 20:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone b69de194-22aa-42d5-ba83-8f053a0b6d85 MY_CLONE 00:28:44.604 20:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=76adde91-9e92-4b27-8d74-69505e27d70d 00:28:44.604 20:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 76adde91-9e92-4b27-8d74-69505e27d70d 00:28:45.170 20:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65799 00:28:53.283 Initializing NVMe Controllers 00:28:53.283 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:28:53.283 Controller IO queue size 128, less than required. 00:28:53.283 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:53.283 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:28:53.283 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:28:53.283 Initialization complete. Launching workers. 
00:28:53.283 ======================================================== 00:28:53.283 Latency(us) 00:28:53.283 Device Information : IOPS MiB/s Average min max 00:28:53.283 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10461.90 40.87 12240.84 1708.60 59585.46 00:28:53.283 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10230.60 39.96 12512.89 3670.90 61007.61 00:28:53.283 ======================================================== 00:28:53.283 Total : 20692.50 80.83 12375.35 1708.60 61007.61 00:28:53.283 00:28:53.283 20:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:53.283 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 778625cb-7795-4c6d-a5ca-ff82ead9e635 00:28:53.283 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 082a967e-c567-4841-adcd-d8a810edbd50 00:28:53.542 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:28:53.542 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:28:53.543 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:28:53.543 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:53.543 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:28:53.543 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:53.543 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:28:53.543 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:53.543 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:53.543 rmmod nvme_tcp 00:28:53.543 rmmod nvme_fabrics 00:28:53.543 rmmod nvme_keyring 00:28:53.801 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:53.801 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:28:53.801 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:28:53.801 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 65659 ']' 00:28:53.801 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 65659 00:28:53.801 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 65659 ']' 00:28:53.801 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 65659 00:28:53.801 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:28:53.801 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:53.801 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65659 00:28:53.801 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:53.801 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:53.801 killing process with pid 65659 00:28:53.801 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 65659' 00:28:53.801 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 65659 00:28:53.801 20:27:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 65659 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.060 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:28:54.319 00:28:54.319 real 0m15.816s 00:28:54.319 user 1m5.715s 00:28:54.319 sys 0m3.783s 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:54.319 ************************************ 00:28:54.319 END TEST nvmf_lvol 00:28:54.319 ************************************ 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:28:54.319 ************************************ 00:28:54.319 START TEST nvmf_lvs_grow 00:28:54.319 ************************************ 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:28:54.319 * Looking for test storage... 00:28:54.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:54.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.319 --rc genhtml_branch_coverage=1 00:28:54.319 --rc genhtml_function_coverage=1 00:28:54.319 --rc genhtml_legend=1 00:28:54.319 --rc geninfo_all_blocks=1 00:28:54.319 --rc geninfo_unexecuted_blocks=1 00:28:54.319 00:28:54.319 ' 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:54.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.319 --rc genhtml_branch_coverage=1 00:28:54.319 --rc genhtml_function_coverage=1 00:28:54.319 --rc genhtml_legend=1 00:28:54.319 --rc geninfo_all_blocks=1 00:28:54.319 --rc geninfo_unexecuted_blocks=1 00:28:54.319 00:28:54.319 ' 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:54.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.319 --rc genhtml_branch_coverage=1 00:28:54.319 --rc genhtml_function_coverage=1 00:28:54.319 --rc genhtml_legend=1 00:28:54.319 --rc geninfo_all_blocks=1 00:28:54.319 --rc geninfo_unexecuted_blocks=1 00:28:54.319 00:28:54.319 ' 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:54.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.319 --rc genhtml_branch_coverage=1 00:28:54.319 --rc genhtml_function_coverage=1 00:28:54.319 --rc genhtml_legend=1 00:28:54.319 --rc geninfo_all_blocks=1 00:28:54.319 --rc geninfo_unexecuted_blocks=1 00:28:54.319 00:28:54.319 ' 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:28:54.319 20:27:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:54.319 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:54.578 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:28:54.578 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:28:54.578 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:54.578 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:54.578 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:54.578 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:54.578 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:54.578 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:54.579 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
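The "[: : integer expression expected" complaint logged above comes from test/nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': an unset option expands to the empty string, test(1) cannot parse that as an integer, and the harness simply carries on. A minimal sketch of the failure and the usual guard, with SOME_FLAG standing in for whatever variable line 33 actually tests:

  SOME_FLAG=''
  [ "$SOME_FLAG" -eq 1 ]          # bash: [: : integer expression expected
  [ "${SOME_FLAG:-0}" -eq 1 ]     # empty/unset falls back to 0, keeping the integer comparison valid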
00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # nvmf_veth_init 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:54.579 Cannot find device "nvmf_init_br" 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:54.579 Cannot find device "nvmf_init_br2" 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:54.579 Cannot find device "nvmf_tgt_br" 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:54.579 Cannot find device "nvmf_tgt_br2" 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:54.579 Cannot find device "nvmf_init_br" 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:54.579 Cannot find device "nvmf_init_br2" 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:54.579 Cannot find device "nvmf_tgt_br" 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:54.579 Cannot find device "nvmf_tgt_br2" 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:54.579 Cannot find device "nvmf_br" 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:54.579 Cannot find device "nvmf_init_if" 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:54.579 Cannot find device "nvmf_init_if2" 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:54.579 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:54.579 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:54.579 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:54.580 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:54.580 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:54.580 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:54.580 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:54.580 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:54.580 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:54.580 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
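The nvmf_veth_init block above (common.sh@177 through @214) rebuilds the same topology the earlier nvmf_lvol run used: the initiator-side veths nvmf_init_if/nvmf_init_if2 (10.0.0.1, 10.0.0.2) stay in the root namespace, the target-side veths nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3, 10.0.0.4) are moved into nvmf_tgt_ns_spdk, and all four peer ends are enslaved to the nvmf_br bridge so the two namespaces can exchange TCP. A reduced sketch of the same idea with one veth pair per side (the demo_* names are illustrative, not the harness's):

  ip netns add demo_tgt_ns
  ip link add demo_init_if type veth peer name demo_init_br
  ip link add demo_tgt_if type veth peer name demo_tgt_br
  ip link set demo_tgt_if netns demo_tgt_ns                 # target end lives inside the namespace
  ip addr add 10.0.0.1/24 dev demo_init_if
  ip netns exec demo_tgt_ns ip addr add 10.0.0.3/24 dev demo_tgt_if
  ip link add demo_br type bridge
  ip link set demo_init_br master demo_br                   # the peer ends meet on the bridge
  ip link set demo_tgt_br master demo_br
  for l in demo_init_if demo_init_br demo_tgt_br demo_br; do ip link set "$l" up; done
  ip netns exec demo_tgt_ns ip link set demo_tgt_if up
  ip netns exec demo_tgt_ns ip link set lo up
  ping -c 1 10.0.0.3                                        # root namespace now reaches the target namespace

The pings at common.sh@222 through @225 that follow are exactly this reachability check, run in both directions across the bridge.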
00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:54.839 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:54.839 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:28:54.839 00:28:54.839 --- 10.0.0.3 ping statistics --- 00:28:54.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.839 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:54.839 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:54.839 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:28:54.839 00:28:54.839 --- 10.0.0.4 ping statistics --- 00:28:54.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.839 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:54.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:54.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:28:54.839 00:28:54.839 --- 10.0.0.1 ping statistics --- 00:28:54.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.839 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:54.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:54.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:28:54.839 00:28:54.839 --- 10.0.0.2 ping statistics --- 00:28:54.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.839 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # return 0 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:54.839 20:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:54.839 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:28:54.839 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:54.839 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:54.839 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:54.839 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=66220 00:28:54.839 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 66220 00:28:54.839 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 66220 ']' 00:28:54.839 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.839 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:54.839 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:54.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.839 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.839 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:54.839 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:54.839 [2024-10-01 20:27:50.083404] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:28:54.839 [2024-10-01 20:27:50.083509] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.098 [2024-10-01 20:27:50.216774] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.098 [2024-10-01 20:27:50.287959] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.098 [2024-10-01 20:27:50.288037] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.098 [2024-10-01 20:27:50.288059] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.098 [2024-10-01 20:27:50.288071] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.098 [2024-10-01 20:27:50.288082] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:55.098 [2024-10-01 20:27:50.288144] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.357 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:55.357 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:28:55.357 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:55.357 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:55.357 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:55.357 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.357 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:55.620 [2024-10-01 20:27:50.810376] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.620 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:28:55.620 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:55.620 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:55.620 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:55.620 ************************************ 00:28:55.620 START TEST lvs_grow_clean 00:28:55.620 ************************************ 00:28:55.620 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:28:55.620 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:55.620 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:55.620 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:55.620 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:55.620 20:27:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:55.620 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:55.620 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:28:55.620 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:28:55.620 20:27:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:56.186 20:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:56.187 20:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:56.444 20:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=15192b56-e509-4f08-a35c-7b47733153ea 00:28:56.444 20:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 15192b56-e509-4f08-a35c-7b47733153ea 00:28:56.444 20:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:56.702 20:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:56.702 20:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:56.702 20:27:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 15192b56-e509-4f08-a35c-7b47733153ea lvol 150 00:28:56.960 20:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c4d96d7d-9d3e-42a4-bd52-e5a7b4fd0aa3 00:28:56.960 20:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:28:56.960 20:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:57.526 [2024-10-01 20:27:52.472180] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:57.526 [2024-10-01 20:27:52.472276] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:57.526 true 00:28:57.526 20:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 15192b56-e509-4f08-a35c-7b47733153ea 00:28:57.526 20:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:57.785 20:27:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:57.785 20:27:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:58.044 20:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c4d96d7d-9d3e-42a4-bd52-e5a7b4fd0aa3 00:28:58.302 20:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:58.560 [2024-10-01 20:27:53.716872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:58.560 20:27:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:28:58.818 20:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66379 00:28:58.818 20:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:58.818 20:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:58.818 20:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66379 /var/tmp/bdevperf.sock 00:28:58.818 20:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 66379 ']' 00:28:58.818 20:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:58.818 20:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:58.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:58.818 20:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:58.818 20:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:58.818 20:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:58.818 [2024-10-01 20:27:54.057352] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:28:58.818 [2024-10-01 20:27:54.057440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66379 ] 00:28:59.076 [2024-10-01 20:27:54.192907] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.076 [2024-10-01 20:27:54.278431] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.334 20:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:59.334 20:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:28:59.334 20:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:59.592 Nvme0n1 00:28:59.592 20:27:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:59.850 [ 00:28:59.850 { 00:28:59.850 "aliases": [ 00:28:59.850 "c4d96d7d-9d3e-42a4-bd52-e5a7b4fd0aa3" 00:28:59.850 ], 00:28:59.850 "assigned_rate_limits": { 00:28:59.850 "r_mbytes_per_sec": 0, 00:28:59.850 "rw_ios_per_sec": 0, 00:28:59.850 "rw_mbytes_per_sec": 0, 00:28:59.850 "w_mbytes_per_sec": 0 00:28:59.850 }, 00:28:59.850 "block_size": 4096, 00:28:59.850 "claimed": false, 00:28:59.850 "driver_specific": { 00:28:59.850 "mp_policy": "active_passive", 00:28:59.850 "nvme": [ 00:28:59.850 { 00:28:59.850 "ctrlr_data": { 00:28:59.850 "ana_reporting": false, 00:28:59.850 "cntlid": 1, 00:28:59.850 "firmware_revision": "25.01", 00:28:59.850 "model_number": "SPDK bdev Controller", 00:28:59.850 "multi_ctrlr": true, 00:28:59.850 "oacs": { 00:28:59.850 "firmware": 0, 00:28:59.850 "format": 0, 00:28:59.850 "ns_manage": 0, 00:28:59.850 "security": 0 00:28:59.850 }, 00:28:59.850 "serial_number": "SPDK0", 00:28:59.850 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:59.850 "vendor_id": "0x8086" 00:28:59.850 }, 00:28:59.850 "ns_data": { 00:28:59.850 "can_share": true, 00:28:59.850 "id": 1 00:28:59.850 }, 00:28:59.850 "trid": { 00:28:59.850 "adrfam": "IPv4", 00:28:59.850 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:59.850 "traddr": "10.0.0.3", 00:28:59.850 "trsvcid": "4420", 00:28:59.850 "trtype": "TCP" 00:28:59.850 }, 00:28:59.850 "vs": { 00:28:59.850 "nvme_version": "1.3" 00:28:59.850 } 00:28:59.850 } 00:28:59.850 ] 00:28:59.850 }, 00:28:59.850 "memory_domains": [ 00:28:59.850 { 00:28:59.850 "dma_device_id": "system", 00:28:59.850 "dma_device_type": 1 00:28:59.850 } 00:28:59.850 ], 00:28:59.850 "name": "Nvme0n1", 00:28:59.850 "num_blocks": 38912, 00:28:59.850 "numa_id": -1, 00:28:59.850 "product_name": "NVMe disk", 00:28:59.850 "supported_io_types": { 00:28:59.850 "abort": true, 00:28:59.850 "compare": true, 00:28:59.850 "compare_and_write": true, 00:28:59.850 "copy": true, 00:28:59.850 "flush": true, 00:28:59.850 "get_zone_info": false, 00:28:59.850 "nvme_admin": true, 00:28:59.850 "nvme_io": true, 00:28:59.850 "nvme_io_md": false, 00:28:59.850 "nvme_iov_md": false, 00:28:59.850 "read": true, 00:28:59.850 "reset": true, 00:28:59.850 "seek_data": false, 00:28:59.850 "seek_hole": false, 00:28:59.850 "unmap": true, 00:28:59.850 
"write": true, 00:28:59.850 "write_zeroes": true, 00:28:59.850 "zcopy": false, 00:28:59.850 "zone_append": false, 00:28:59.850 "zone_management": false 00:28:59.850 }, 00:28:59.850 "uuid": "c4d96d7d-9d3e-42a4-bd52-e5a7b4fd0aa3", 00:28:59.850 "zoned": false 00:28:59.850 } 00:28:59.850 ] 00:29:00.108 20:27:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66414 00:29:00.108 20:27:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:00.108 20:27:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:00.108 Running I/O for 10 seconds... 00:29:01.040 Latency(us) 00:29:01.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.040 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:01.040 Nvme0n1 : 1.00 7635.00 29.82 0.00 0.00 0.00 0.00 0.00 00:29:01.040 =================================================================================================================== 00:29:01.040 Total : 7635.00 29.82 0.00 0.00 0.00 0.00 0.00 00:29:01.040 00:29:01.975 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 15192b56-e509-4f08-a35c-7b47733153ea 00:29:01.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:01.975 Nvme0n1 : 2.00 7538.00 29.45 0.00 0.00 0.00 0.00 0.00 00:29:01.975 =================================================================================================================== 00:29:01.975 Total : 7538.00 29.45 0.00 0.00 0.00 0.00 0.00 00:29:01.975 00:29:02.233 true 00:29:02.233 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 15192b56-e509-4f08-a35c-7b47733153ea 00:29:02.233 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:02.798 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:02.799 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:02.799 20:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 66414 00:29:03.057 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:03.057 Nvme0n1 : 3.00 7526.00 29.40 0.00 0.00 0.00 0.00 0.00 00:29:03.057 =================================================================================================================== 00:29:03.057 Total : 7526.00 29.40 0.00 0.00 0.00 0.00 0.00 00:29:03.057 00:29:04.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:04.014 Nvme0n1 : 4.00 7499.00 29.29 0.00 0.00 0.00 0.00 0.00 00:29:04.014 =================================================================================================================== 00:29:04.014 Total : 7499.00 29.29 0.00 0.00 0.00 0.00 0.00 00:29:04.014 00:29:05.391 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:05.391 Nvme0n1 : 5.00 7377.60 28.82 0.00 0.00 0.00 0.00 0.00 00:29:05.391 
=================================================================================================================== 00:29:05.391 Total : 7377.60 28.82 0.00 0.00 0.00 0.00 0.00 00:29:05.391 00:29:06.324 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:06.324 Nvme0n1 : 6.00 7358.17 28.74 0.00 0.00 0.00 0.00 0.00 00:29:06.324 =================================================================================================================== 00:29:06.324 Total : 7358.17 28.74 0.00 0.00 0.00 0.00 0.00 00:29:06.324 00:29:07.258 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:07.258 Nvme0n1 : 7.00 7331.86 28.64 0.00 0.00 0.00 0.00 0.00 00:29:07.258 =================================================================================================================== 00:29:07.258 Total : 7331.86 28.64 0.00 0.00 0.00 0.00 0.00 00:29:07.258 00:29:08.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:08.261 Nvme0n1 : 8.00 7329.50 28.63 0.00 0.00 0.00 0.00 0.00 00:29:08.261 =================================================================================================================== 00:29:08.261 Total : 7329.50 28.63 0.00 0.00 0.00 0.00 0.00 00:29:08.261 00:29:09.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:09.194 Nvme0n1 : 9.00 7320.56 28.60 0.00 0.00 0.00 0.00 0.00 00:29:09.194 =================================================================================================================== 00:29:09.194 Total : 7320.56 28.60 0.00 0.00 0.00 0.00 0.00 00:29:09.194 00:29:10.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:10.129 Nvme0n1 : 10.00 7313.00 28.57 0.00 0.00 0.00 0.00 0.00 00:29:10.129 =================================================================================================================== 00:29:10.129 Total : 7313.00 28.57 0.00 0.00 0.00 0.00 0.00 00:29:10.129 00:29:10.129 00:29:10.129 Latency(us) 00:29:10.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:10.129 Nvme0n1 : 10.01 7318.24 28.59 0.00 0.00 17484.01 8221.79 98184.84 00:29:10.129 =================================================================================================================== 00:29:10.129 Total : 7318.24 28.59 0.00 0.00 17484.01 8221.79 98184.84 00:29:10.129 { 00:29:10.129 "results": [ 00:29:10.129 { 00:29:10.129 "job": "Nvme0n1", 00:29:10.129 "core_mask": "0x2", 00:29:10.129 "workload": "randwrite", 00:29:10.129 "status": "finished", 00:29:10.129 "queue_depth": 128, 00:29:10.129 "io_size": 4096, 00:29:10.129 "runtime": 10.010336, 00:29:10.129 "iops": 7318.235871403317, 00:29:10.129 "mibps": 28.586858872669207, 00:29:10.129 "io_failed": 0, 00:29:10.129 "io_timeout": 0, 00:29:10.129 "avg_latency_us": 17484.01355984702, 00:29:10.129 "min_latency_us": 8221.789090909091, 00:29:10.129 "max_latency_us": 98184.84363636364 00:29:10.129 } 00:29:10.129 ], 00:29:10.129 "core_count": 1 00:29:10.129 } 00:29:10.129 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66379 00:29:10.129 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 66379 ']' 00:29:10.130 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 66379 00:29:10.130 20:28:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:29:10.130 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:10.130 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66379 00:29:10.130 killing process with pid 66379 00:29:10.130 Received shutdown signal, test time was about 10.000000 seconds 00:29:10.130 00:29:10.130 Latency(us) 00:29:10.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.130 =================================================================================================================== 00:29:10.130 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:10.130 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:10.130 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:10.130 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66379' 00:29:10.130 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 66379 00:29:10.130 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 66379 00:29:10.391 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:29:10.648 20:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:11.215 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 15192b56-e509-4f08-a35c-7b47733153ea 00:29:11.215 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:11.215 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:11.215 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:11.215 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:11.473 [2024-10-01 20:28:06.705870] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:11.732 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 15192b56-e509-4f08-a35c-7b47733153ea 00:29:11.732 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:29:11.732 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 15192b56-e509-4f08-a35c-7b47733153ea 00:29:11.732 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:11.732 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:11.732 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:11.732 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:11.732 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:11.732 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:11.732 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:11.732 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:29:11.732 20:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 15192b56-e509-4f08-a35c-7b47733153ea 00:29:11.990 2024/10/01 20:28:07 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:15192b56-e509-4f08-a35c-7b47733153ea], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:29:11.990 request: 00:29:11.990 { 00:29:11.990 "method": "bdev_lvol_get_lvstores", 00:29:11.990 "params": { 00:29:11.990 "uuid": "15192b56-e509-4f08-a35c-7b47733153ea" 00:29:11.990 } 00:29:11.990 } 00:29:11.990 Got JSON-RPC error response 00:29:11.990 GoRPCClient: error on JSON-RPC call 00:29:11.990 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:29:11.990 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:11.990 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:11.990 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:11.990 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:12.248 aio_bdev 00:29:12.248 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c4d96d7d-9d3e-42a4-bd52-e5a7b4fd0aa3 00:29:12.248 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=c4d96d7d-9d3e-42a4-bd52-e5a7b4fd0aa3 00:29:12.248 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:12.248 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:29:12.248 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:12.248 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
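The hot-remove check above deletes the backing AIO bdev, expects bdev_lvol_get_lvstores to fail with Code=-19 (No such device), and then re-creates the AIO bdev so the examine path re-registers the lvstore and its lvol. Condensed, the round trip looks like the sketch below; the `&& echo` guard is illustrative and not part of the test script:

  # pull the base bdev out from under the lvstore
  scripts/rpc.py bdev_aio_delete aio_bdev
  # the lvstore closed with its base bdev, so this must now fail with -19
  scripts/rpc.py bdev_lvol_get_lvstores -u 15192b56-e509-4f08-a35c-7b47733153ea && echo 'unexpected success'
  # re-create the base bdev; examine re-registers lvs and the old lvol
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_wait_for_examine
  # the lvol comes back under its original UUID
  scripts/rpc.py bdev_get_bdevs -b c4d96d7d-9d3e-42a4-bd52-e5a7b4fd0aa3 -t 2000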
00:29:12.248 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:12.507 20:28:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c4d96d7d-9d3e-42a4-bd52-e5a7b4fd0aa3 -t 2000 00:29:12.788 [ 00:29:12.788 { 00:29:12.788 "aliases": [ 00:29:12.789 "lvs/lvol" 00:29:12.789 ], 00:29:12.789 "assigned_rate_limits": { 00:29:12.789 "r_mbytes_per_sec": 0, 00:29:12.789 "rw_ios_per_sec": 0, 00:29:12.789 "rw_mbytes_per_sec": 0, 00:29:12.789 "w_mbytes_per_sec": 0 00:29:12.789 }, 00:29:12.789 "block_size": 4096, 00:29:12.789 "claimed": false, 00:29:12.789 "driver_specific": { 00:29:12.789 "lvol": { 00:29:12.789 "base_bdev": "aio_bdev", 00:29:12.789 "clone": false, 00:29:12.789 "esnap_clone": false, 00:29:12.789 "lvol_store_uuid": "15192b56-e509-4f08-a35c-7b47733153ea", 00:29:12.789 "num_allocated_clusters": 38, 00:29:12.789 "snapshot": false, 00:29:12.789 "thin_provision": false 00:29:12.789 } 00:29:12.789 }, 00:29:12.789 "name": "c4d96d7d-9d3e-42a4-bd52-e5a7b4fd0aa3", 00:29:12.789 "num_blocks": 38912, 00:29:12.789 "product_name": "Logical Volume", 00:29:12.789 "supported_io_types": { 00:29:12.789 "abort": false, 00:29:12.789 "compare": false, 00:29:12.789 "compare_and_write": false, 00:29:12.789 "copy": false, 00:29:12.789 "flush": false, 00:29:12.789 "get_zone_info": false, 00:29:12.789 "nvme_admin": false, 00:29:12.789 "nvme_io": false, 00:29:12.789 "nvme_io_md": false, 00:29:12.789 "nvme_iov_md": false, 00:29:12.789 "read": true, 00:29:12.789 "reset": true, 00:29:12.789 "seek_data": true, 00:29:12.789 "seek_hole": true, 00:29:12.789 "unmap": true, 00:29:12.789 "write": true, 00:29:12.789 "write_zeroes": true, 00:29:12.789 "zcopy": false, 00:29:12.789 "zone_append": false, 00:29:12.789 "zone_management": false 00:29:12.789 }, 00:29:12.789 "uuid": "c4d96d7d-9d3e-42a4-bd52-e5a7b4fd0aa3", 00:29:12.789 "zoned": false 00:29:12.789 } 00:29:12.789 ] 00:29:12.789 20:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:29:12.789 20:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 15192b56-e509-4f08-a35c-7b47733153ea 00:29:12.789 20:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:13.048 20:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:13.048 20:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 15192b56-e509-4f08-a35c-7b47733153ea 00:29:13.048 20:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:13.307 20:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:13.307 20:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c4d96d7d-9d3e-42a4-bd52-e5a7b4fd0aa3 00:29:13.874 20:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 15192b56-e509-4f08-a35c-7b47733153ea 00:29:13.874 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:14.440 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:29:14.698 ************************************ 00:29:14.698 END TEST lvs_grow_clean 00:29:14.698 ************************************ 00:29:14.698 00:29:14.698 real 0m19.008s 00:29:14.698 user 0m18.342s 00:29:14.698 sys 0m2.195s 00:29:14.698 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:14.698 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:14.698 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:14.698 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:14.698 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:14.698 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:14.698 ************************************ 00:29:14.698 START TEST lvs_grow_dirty 00:29:14.698 ************************************ 00:29:14.698 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:29:14.698 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:14.698 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:14.698 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:14.698 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:14.698 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:14.698 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:14.698 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:29:14.698 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:29:14.698 20:28:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:15.265 20:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:15.265 20:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:15.523 20:28:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9cabbd46-ac85-469e-9790-f1aa8405b963 00:29:15.523 20:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:15.523 20:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9cabbd46-ac85-469e-9790-f1aa8405b963 00:29:15.781 20:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:15.781 20:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:15.781 20:28:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9cabbd46-ac85-469e-9790-f1aa8405b963 lvol 150 00:29:16.040 20:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=2bef8068-af78-40fa-8a0c-48301ad1615d 00:29:16.040 20:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:29:16.040 20:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:16.606 [2024-10-01 20:28:11.577190] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:16.606 [2024-10-01 20:28:11.577300] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:16.606 true 00:29:16.606 20:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:16.606 20:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9cabbd46-ac85-469e-9790-f1aa8405b963 00:29:16.866 20:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:16.866 20:28:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:17.125 20:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2bef8068-af78-40fa-8a0c-48301ad1615d 00:29:17.437 20:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:29:17.711 [2024-10-01 20:28:12.765755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:17.711 20:28:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:29:17.970 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # 
bdevperf_pid=66825 00:29:17.970 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:17.970 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:17.970 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66825 /var/tmp/bdevperf.sock 00:29:17.970 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 66825 ']' 00:29:17.970 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:17.970 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:17.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:17.970 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:17.970 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:17.970 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:17.971 [2024-10-01 20:28:13.143547] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:29:17.971 [2024-10-01 20:28:13.143644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66825 ] 00:29:18.230 [2024-10-01 20:28:13.277943] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.230 [2024-10-01 20:28:13.349150] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.230 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:18.230 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:29:18.230 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:18.795 Nvme0n1 00:29:18.795 20:28:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:19.053 [ 00:29:19.053 { 00:29:19.053 "aliases": [ 00:29:19.053 "2bef8068-af78-40fa-8a0c-48301ad1615d" 00:29:19.053 ], 00:29:19.053 "assigned_rate_limits": { 00:29:19.053 "r_mbytes_per_sec": 0, 00:29:19.053 "rw_ios_per_sec": 0, 00:29:19.053 "rw_mbytes_per_sec": 0, 00:29:19.053 "w_mbytes_per_sec": 0 00:29:19.053 }, 00:29:19.053 "block_size": 4096, 00:29:19.053 "claimed": false, 00:29:19.053 "driver_specific": { 00:29:19.053 "mp_policy": "active_passive", 00:29:19.053 "nvme": [ 00:29:19.053 { 00:29:19.053 "ctrlr_data": { 00:29:19.053 
"ana_reporting": false, 00:29:19.053 "cntlid": 1, 00:29:19.053 "firmware_revision": "25.01", 00:29:19.053 "model_number": "SPDK bdev Controller", 00:29:19.053 "multi_ctrlr": true, 00:29:19.053 "oacs": { 00:29:19.053 "firmware": 0, 00:29:19.053 "format": 0, 00:29:19.053 "ns_manage": 0, 00:29:19.053 "security": 0 00:29:19.053 }, 00:29:19.053 "serial_number": "SPDK0", 00:29:19.053 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:19.053 "vendor_id": "0x8086" 00:29:19.053 }, 00:29:19.053 "ns_data": { 00:29:19.053 "can_share": true, 00:29:19.053 "id": 1 00:29:19.053 }, 00:29:19.053 "trid": { 00:29:19.053 "adrfam": "IPv4", 00:29:19.053 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:19.053 "traddr": "10.0.0.3", 00:29:19.054 "trsvcid": "4420", 00:29:19.054 "trtype": "TCP" 00:29:19.054 }, 00:29:19.054 "vs": { 00:29:19.054 "nvme_version": "1.3" 00:29:19.054 } 00:29:19.054 } 00:29:19.054 ] 00:29:19.054 }, 00:29:19.054 "memory_domains": [ 00:29:19.054 { 00:29:19.054 "dma_device_id": "system", 00:29:19.054 "dma_device_type": 1 00:29:19.054 } 00:29:19.054 ], 00:29:19.054 "name": "Nvme0n1", 00:29:19.054 "num_blocks": 38912, 00:29:19.054 "numa_id": -1, 00:29:19.054 "product_name": "NVMe disk", 00:29:19.054 "supported_io_types": { 00:29:19.054 "abort": true, 00:29:19.054 "compare": true, 00:29:19.054 "compare_and_write": true, 00:29:19.054 "copy": true, 00:29:19.054 "flush": true, 00:29:19.054 "get_zone_info": false, 00:29:19.054 "nvme_admin": true, 00:29:19.054 "nvme_io": true, 00:29:19.054 "nvme_io_md": false, 00:29:19.054 "nvme_iov_md": false, 00:29:19.054 "read": true, 00:29:19.054 "reset": true, 00:29:19.054 "seek_data": false, 00:29:19.054 "seek_hole": false, 00:29:19.054 "unmap": true, 00:29:19.054 "write": true, 00:29:19.054 "write_zeroes": true, 00:29:19.054 "zcopy": false, 00:29:19.054 "zone_append": false, 00:29:19.054 "zone_management": false 00:29:19.054 }, 00:29:19.054 "uuid": "2bef8068-af78-40fa-8a0c-48301ad1615d", 00:29:19.054 "zoned": false 00:29:19.054 } 00:29:19.054 ] 00:29:19.054 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66859 00:29:19.054 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:19.054 20:28:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:19.054 Running I/O for 10 seconds... 
00:29:20.429 Latency(us) 00:29:20.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:20.429 Nvme0n1 : 1.00 7977.00 31.16 0.00 0.00 0.00 0.00 0.00 00:29:20.429 =================================================================================================================== 00:29:20.429 Total : 7977.00 31.16 0.00 0.00 0.00 0.00 0.00 00:29:20.429 00:29:20.997 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9cabbd46-ac85-469e-9790-f1aa8405b963 00:29:21.255 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:21.255 Nvme0n1 : 2.00 7856.00 30.69 0.00 0.00 0.00 0.00 0.00 00:29:21.255 =================================================================================================================== 00:29:21.255 Total : 7856.00 30.69 0.00 0.00 0.00 0.00 0.00 00:29:21.255 00:29:21.550 true 00:29:21.550 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9cabbd46-ac85-469e-9790-f1aa8405b963 00:29:21.550 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:21.808 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:21.808 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:21.808 20:28:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66859 00:29:22.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:22.066 Nvme0n1 : 3.00 7790.67 30.43 0.00 0.00 0.00 0.00 0.00 00:29:22.066 =================================================================================================================== 00:29:22.066 Total : 7790.67 30.43 0.00 0.00 0.00 0.00 0.00 00:29:22.066 00:29:23.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:23.440 Nvme0n1 : 4.00 7731.75 30.20 0.00 0.00 0.00 0.00 0.00 00:29:23.440 =================================================================================================================== 00:29:23.440 Total : 7731.75 30.20 0.00 0.00 0.00 0.00 0.00 00:29:23.440 00:29:24.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:24.374 Nvme0n1 : 5.00 7666.80 29.95 0.00 0.00 0.00 0.00 0.00 00:29:24.374 =================================================================================================================== 00:29:24.374 Total : 7666.80 29.95 0.00 0.00 0.00 0.00 0.00 00:29:24.374 00:29:25.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:25.309 Nvme0n1 : 6.00 7630.50 29.81 0.00 0.00 0.00 0.00 0.00 00:29:25.309 =================================================================================================================== 00:29:25.309 Total : 7630.50 29.81 0.00 0.00 0.00 0.00 0.00 00:29:25.309 00:29:26.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:26.283 Nvme0n1 : 7.00 7065.71 27.60 0.00 0.00 0.00 0.00 0.00 00:29:26.283 =================================================================================================================== 00:29:26.283 
Total : 7065.71 27.60 0.00 0.00 0.00 0.00 0.00 00:29:26.283 00:29:27.219 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:27.219 Nvme0n1 : 8.00 7040.50 27.50 0.00 0.00 0.00 0.00 0.00 00:29:27.219 =================================================================================================================== 00:29:27.219 Total : 7040.50 27.50 0.00 0.00 0.00 0.00 0.00 00:29:27.219 00:29:28.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:28.154 Nvme0n1 : 9.00 6998.22 27.34 0.00 0.00 0.00 0.00 0.00 00:29:28.154 =================================================================================================================== 00:29:28.154 Total : 6998.22 27.34 0.00 0.00 0.00 0.00 0.00 00:29:28.154 00:29:29.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:29.089 Nvme0n1 : 10.00 7025.80 27.44 0.00 0.00 0.00 0.00 0.00 00:29:29.089 =================================================================================================================== 00:29:29.089 Total : 7025.80 27.44 0.00 0.00 0.00 0.00 0.00 00:29:29.089 00:29:29.089 00:29:29.090 Latency(us) 00:29:29.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:29.090 Nvme0n1 : 10.02 7022.95 27.43 0.00 0.00 18220.05 6702.55 474718.95 00:29:29.090 =================================================================================================================== 00:29:29.090 Total : 7022.95 27.43 0.00 0.00 18220.05 6702.55 474718.95 00:29:29.090 { 00:29:29.090 "results": [ 00:29:29.090 { 00:29:29.090 "job": "Nvme0n1", 00:29:29.090 "core_mask": "0x2", 00:29:29.090 "workload": "randwrite", 00:29:29.090 "status": "finished", 00:29:29.090 "queue_depth": 128, 00:29:29.090 "io_size": 4096, 00:29:29.090 "runtime": 10.022291, 00:29:29.090 "iops": 7022.945152959538, 00:29:29.090 "mibps": 27.433379503748196, 00:29:29.090 "io_failed": 0, 00:29:29.090 "io_timeout": 0, 00:29:29.090 "avg_latency_us": 18220.049513565453, 00:29:29.090 "min_latency_us": 6702.545454545455, 00:29:29.090 "max_latency_us": 474718.9527272727 00:29:29.090 } 00:29:29.090 ], 00:29:29.090 "core_count": 1 00:29:29.090 } 00:29:29.090 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66825 00:29:29.090 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 66825 ']' 00:29:29.090 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 66825 00:29:29.090 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:29:29.090 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:29.090 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66825 00:29:29.348 killing process with pid 66825 00:29:29.348 Received shutdown signal, test time was about 10.000000 seconds 00:29:29.348 00:29:29.348 Latency(us) 00:29:29.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.348 =================================================================================================================== 00:29:29.348 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:29.348 20:28:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:29.348 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:29.348 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66825' 00:29:29.348 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 66825 00:29:29.348 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 66825 00:29:29.348 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:29:29.915 20:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:30.211 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9cabbd46-ac85-469e-9790-f1aa8405b963 00:29:30.211 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:30.489 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:30.489 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:30.489 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 66220 00:29:30.489 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 66220 00:29:30.489 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 66220 Killed "${NVMF_APP[@]}" "$@" 00:29:30.489 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:30.489 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:30.489 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:30.489 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:30.489 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:30.489 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=67033 00:29:30.489 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:30.489 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 67033 00:29:30.489 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 67033 ']' 00:29:30.489 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.489 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 
-- # local max_retries=100 00:29:30.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.489 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.489 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:30.489 20:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:30.489 [2024-10-01 20:28:25.724169] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:29:30.489 [2024-10-01 20:28:25.724261] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.749 [2024-10-01 20:28:25.863302] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.749 [2024-10-01 20:28:25.922275] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.749 [2024-10-01 20:28:25.922341] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.749 [2024-10-01 20:28:25.922351] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.749 [2024-10-01 20:28:25.922360] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.749 [2024-10-01 20:28:25.922367] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:30.749 [2024-10-01 20:28:25.922399] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.008 20:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:31.008 20:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:29:31.008 20:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:31.008 20:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:31.008 20:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:31.008 20:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:31.008 20:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:31.266 [2024-10-01 20:28:26.342397] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:31.266 [2024-10-01 20:28:26.342750] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:31.266 [2024-10-01 20:28:26.342940] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:31.267 20:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:31.267 20:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 
2bef8068-af78-40fa-8a0c-48301ad1615d 00:29:31.267 20:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=2bef8068-af78-40fa-8a0c-48301ad1615d 00:29:31.267 20:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:31.267 20:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:29:31.267 20:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:31.267 20:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:31.267 20:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:31.525 20:28:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2bef8068-af78-40fa-8a0c-48301ad1615d -t 2000 00:29:31.782 [ 00:29:31.782 { 00:29:31.782 "aliases": [ 00:29:31.782 "lvs/lvol" 00:29:31.782 ], 00:29:31.782 "assigned_rate_limits": { 00:29:31.782 "r_mbytes_per_sec": 0, 00:29:31.782 "rw_ios_per_sec": 0, 00:29:31.782 "rw_mbytes_per_sec": 0, 00:29:31.782 "w_mbytes_per_sec": 0 00:29:31.782 }, 00:29:31.782 "block_size": 4096, 00:29:31.782 "claimed": false, 00:29:31.782 "driver_specific": { 00:29:31.782 "lvol": { 00:29:31.782 "base_bdev": "aio_bdev", 00:29:31.782 "clone": false, 00:29:31.782 "esnap_clone": false, 00:29:31.782 "lvol_store_uuid": "9cabbd46-ac85-469e-9790-f1aa8405b963", 00:29:31.782 "num_allocated_clusters": 38, 00:29:31.782 "snapshot": false, 00:29:31.782 "thin_provision": false 00:29:31.782 } 00:29:31.782 }, 00:29:31.782 "name": "2bef8068-af78-40fa-8a0c-48301ad1615d", 00:29:31.782 "num_blocks": 38912, 00:29:31.782 "product_name": "Logical Volume", 00:29:31.782 "supported_io_types": { 00:29:31.782 "abort": false, 00:29:31.782 "compare": false, 00:29:31.782 "compare_and_write": false, 00:29:31.782 "copy": false, 00:29:31.782 "flush": false, 00:29:31.782 "get_zone_info": false, 00:29:31.782 "nvme_admin": false, 00:29:31.782 "nvme_io": false, 00:29:31.782 "nvme_io_md": false, 00:29:31.782 "nvme_iov_md": false, 00:29:31.782 "read": true, 00:29:31.782 "reset": true, 00:29:31.782 "seek_data": true, 00:29:31.782 "seek_hole": true, 00:29:31.782 "unmap": true, 00:29:31.782 "write": true, 00:29:31.782 "write_zeroes": true, 00:29:31.782 "zcopy": false, 00:29:31.782 "zone_append": false, 00:29:31.782 "zone_management": false 00:29:31.782 }, 00:29:31.782 "uuid": "2bef8068-af78-40fa-8a0c-48301ad1615d", 00:29:31.782 "zoned": false 00:29:31.782 } 00:29:31.782 ] 00:29:32.041 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:29:32.041 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9cabbd46-ac85-469e-9790-f1aa8405b963 00:29:32.041 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:32.299 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:32.299 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r 
'.[0].total_data_clusters' 00:29:32.299 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9cabbd46-ac85-469e-9790-f1aa8405b963 00:29:32.556 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:32.556 20:28:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:32.815 [2024-10-01 20:28:28.024204] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:33.073 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9cabbd46-ac85-469e-9790-f1aa8405b963 00:29:33.073 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:29:33.073 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9cabbd46-ac85-469e-9790-f1aa8405b963 00:29:33.073 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:33.073 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:33.073 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:33.073 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:33.073 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:33.073 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:33.073 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:33.073 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:29:33.073 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9cabbd46-ac85-469e-9790-f1aa8405b963 00:29:33.331 2024/10/01 20:28:28 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:9cabbd46-ac85-469e-9790-f1aa8405b963], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:29:33.331 request: 00:29:33.331 { 00:29:33.331 "method": "bdev_lvol_get_lvstores", 00:29:33.331 "params": { 00:29:33.331 "uuid": "9cabbd46-ac85-469e-9790-f1aa8405b963" 00:29:33.331 } 00:29:33.331 } 00:29:33.331 Got JSON-RPC error response 00:29:33.331 GoRPCClient: error on JSON-RPC call 00:29:33.331 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:29:33.331 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 
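This expected -19 failure mirrors the clean run; what distinguishes the dirty variant happened just above it: the first target was SIGKILLed at @74 (kill -9 66220) with the grown lvstore still open, so re-opening the AIO file at 20:28:26 forced a blobstore replay rather than a clean load (the "Performing recovery on blobstore" and "Recover: blob" notices). The @79/@80 assertions confirm nothing was lost across the crash: total_data_clusters is still 99 (400 MiB at 4 MiB per cluster, less one metadata cluster), and with the 150M lvol holding 38 allocated clusters, free_clusters is 99 - 38 = 61. A sketch of that post-recovery verification, reusing this run's lvstore UUID and shortened paths:

  # re-opening a dirty superblob triggers blobstore recovery
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_wait_for_examine
  # cluster accounting should survive the crash intact
  scripts/rpc.py bdev_lvol_get_lvstores -u 9cabbd46-ac85-469e-9790-f1aa8405b963 | jq -r '.[0].free_clusters'          # expect 61
  scripts/rpc.py bdev_lvol_get_lvstores -u 9cabbd46-ac85-469e-9790-f1aa8405b963 | jq -r '.[0].total_data_clusters'    # expect 99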
00:29:33.331 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:33.331 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:33.331 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:33.589 aio_bdev 00:29:33.589 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2bef8068-af78-40fa-8a0c-48301ad1615d 00:29:33.589 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=2bef8068-af78-40fa-8a0c-48301ad1615d 00:29:33.589 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:33.589 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:29:33.589 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:33.589 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:33.589 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:33.847 20:28:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2bef8068-af78-40fa-8a0c-48301ad1615d -t 2000 00:29:34.105 [ 00:29:34.105 { 00:29:34.105 "aliases": [ 00:29:34.105 "lvs/lvol" 00:29:34.105 ], 00:29:34.105 "assigned_rate_limits": { 00:29:34.105 "r_mbytes_per_sec": 0, 00:29:34.105 "rw_ios_per_sec": 0, 00:29:34.105 "rw_mbytes_per_sec": 0, 00:29:34.105 "w_mbytes_per_sec": 0 00:29:34.105 }, 00:29:34.105 "block_size": 4096, 00:29:34.105 "claimed": false, 00:29:34.105 "driver_specific": { 00:29:34.105 "lvol": { 00:29:34.105 "base_bdev": "aio_bdev", 00:29:34.105 "clone": false, 00:29:34.105 "esnap_clone": false, 00:29:34.105 "lvol_store_uuid": "9cabbd46-ac85-469e-9790-f1aa8405b963", 00:29:34.105 "num_allocated_clusters": 38, 00:29:34.105 "snapshot": false, 00:29:34.105 "thin_provision": false 00:29:34.105 } 00:29:34.105 }, 00:29:34.105 "name": "2bef8068-af78-40fa-8a0c-48301ad1615d", 00:29:34.105 "num_blocks": 38912, 00:29:34.105 "product_name": "Logical Volume", 00:29:34.105 "supported_io_types": { 00:29:34.105 "abort": false, 00:29:34.105 "compare": false, 00:29:34.105 "compare_and_write": false, 00:29:34.105 "copy": false, 00:29:34.105 "flush": false, 00:29:34.105 "get_zone_info": false, 00:29:34.105 "nvme_admin": false, 00:29:34.105 "nvme_io": false, 00:29:34.105 "nvme_io_md": false, 00:29:34.105 "nvme_iov_md": false, 00:29:34.105 "read": true, 00:29:34.105 "reset": true, 00:29:34.105 "seek_data": true, 00:29:34.105 "seek_hole": true, 00:29:34.105 "unmap": true, 00:29:34.105 "write": true, 00:29:34.105 "write_zeroes": true, 00:29:34.105 "zcopy": false, 00:29:34.105 "zone_append": false, 00:29:34.105 "zone_management": false 00:29:34.105 }, 00:29:34.105 "uuid": "2bef8068-af78-40fa-8a0c-48301ad1615d", 00:29:34.105 "zoned": false 00:29:34.105 } 00:29:34.105 ] 00:29:34.105 20:28:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@907 -- # return 0 00:29:34.105 20:28:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9cabbd46-ac85-469e-9790-f1aa8405b963 00:29:34.105 20:28:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:34.363 20:28:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:34.363 20:28:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9cabbd46-ac85-469e-9790-f1aa8405b963 00:29:34.363 20:28:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:34.620 20:28:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:34.620 20:28:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2bef8068-af78-40fa-8a0c-48301ad1615d 00:29:34.878 20:28:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9cabbd46-ac85-469e-9790-f1aa8405b963 00:29:35.444 20:28:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:35.444 20:28:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:29:36.011 ************************************ 00:29:36.011 END TEST lvs_grow_dirty 00:29:36.011 ************************************ 00:29:36.011 00:29:36.011 real 0m21.232s 00:29:36.011 user 0m44.274s 00:29:36.011 sys 0m7.662s 00:29:36.011 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:36.011 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:36.011 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:36.011 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:29:36.011 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:29:36.011 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:29:36.011 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:36.011 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:29:36.011 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:29:36.011 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:29:36.011 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:36.011 nvmf_trace.0 00:29:36.011 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@823 -- # return 0 00:29:36.011 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:36.011 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:36.011 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:29:36.578 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:36.578 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:36.578 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:36.578 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:36.578 rmmod nvme_tcp 00:29:36.578 rmmod nvme_fabrics 00:29:36.578 rmmod nvme_keyring 00:29:36.578 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:36.578 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:36.578 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:36.578 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 67033 ']' 00:29:36.578 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 67033 00:29:36.578 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 67033 ']' 00:29:36.578 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 67033 00:29:36.578 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:29:36.578 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:36.578 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67033 00:29:36.578 killing process with pid 67033 00:29:36.578 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:36.578 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:36.578 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67033' 00:29:36.578 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 67033 00:29:36.578 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 67033 00:29:36.837 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:36.837 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:36.837 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:36.837 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:36.837 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:29:36.837 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:36.837 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:29:36.837 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:36.837 20:28:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:36.837 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:36.837 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:36.837 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:36.837 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:36.837 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:36.837 20:28:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:36.837 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:36.837 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:36.837 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:36.837 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:36.838 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:36.838 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:37.095 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:37.095 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:37.095 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.095 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.095 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.095 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:29:37.095 00:29:37.095 real 0m42.759s 00:29:37.095 user 1m9.442s 00:29:37.095 sys 0m10.899s 00:29:37.095 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:37.095 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:37.095 ************************************ 00:29:37.095 END TEST nvmf_lvs_grow 00:29:37.095 ************************************ 00:29:37.095 20:28:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:29:37.095 20:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:37.095 20:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:37.095 20:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:29:37.095 ************************************ 00:29:37.095 START TEST nvmf_bdev_io_wait 00:29:37.095 ************************************ 00:29:37.095 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 
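Between the END TEST marker and the suite totals above, the teardown at nvmf_lvs_grow.sh@92-95 unwinds the stack strictly top-down, after which nvmftestfini unloads the nvme modules, strips the SPDK_NVMF-tagged iptables rules, and deletes the veth/bridge topology. A condensed sketch of that cleanup order, with the UUIDs and paths from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Unwind top-down: lvol, then lvstore, then aio bdev, then the backing file.
  "$rpc" bdev_lvol_delete 2bef8068-af78-40fa-8a0c-48301ad1615d
  "$rpc" bdev_lvol_delete_lvstore -u 9cabbd46-ac85-469e-9790-f1aa8405b963
  "$rpc" bdev_aio_delete aio_bdev
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  # iptr: restore iptables, keeping only rules not tagged by this suite.
  iptables-save | grep -v SPDK_NVMF | iptables-restore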
00:29:37.095 * Looking for test storage... 00:29:37.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:37.095 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:37.095 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:29:37.095 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:37.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.354 --rc genhtml_branch_coverage=1 00:29:37.354 --rc genhtml_function_coverage=1 00:29:37.354 --rc genhtml_legend=1 00:29:37.354 --rc geninfo_all_blocks=1 00:29:37.354 --rc geninfo_unexecuted_blocks=1 00:29:37.354 00:29:37.354 ' 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:37.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.354 --rc genhtml_branch_coverage=1 00:29:37.354 --rc genhtml_function_coverage=1 00:29:37.354 --rc genhtml_legend=1 00:29:37.354 --rc geninfo_all_blocks=1 00:29:37.354 --rc geninfo_unexecuted_blocks=1 00:29:37.354 00:29:37.354 ' 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:37.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.354 --rc genhtml_branch_coverage=1 00:29:37.354 --rc genhtml_function_coverage=1 00:29:37.354 --rc genhtml_legend=1 00:29:37.354 --rc geninfo_all_blocks=1 00:29:37.354 --rc geninfo_unexecuted_blocks=1 00:29:37.354 00:29:37.354 ' 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:37.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.354 --rc genhtml_branch_coverage=1 00:29:37.354 --rc genhtml_function_coverage=1 00:29:37.354 --rc genhtml_legend=1 00:29:37.354 --rc geninfo_all_blocks=1 00:29:37.354 --rc geninfo_unexecuted_blocks=1 00:29:37.354 00:29:37.354 ' 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:37.354 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:37.355 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
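The lcov gate traced above is scripts/common.sh's dotted-version comparison: lt 1.15 2 calls cmp_versions, which splits both versions on '.', '-' and ':' (IFS=.-:), validates each component with decimal, and walks up to the length of the longer array, deciding on the first unequal pair. A compact sketch of the same algorithm (the function name version_lt is illustrative; the suite's real entry points are lt and cmp_versions, and the decimal validation step is elided here):

  # Return 0 (true) when $1 < $2 under component-wise numeric comparison.
  version_lt() {
      local IFS=.-:
      local -a ver1=($1) ver2=($2)
      local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( i = 0; i < len; i++ )); do
          local a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing components count as 0
          (( a > b )) && return 1
          (( a < b )) && return 0
      done
      return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov is older than 2.x"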
00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # nvmf_veth_init 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:37.355 
20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:37.355 Cannot find device "nvmf_init_br" 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:37.355 Cannot find device "nvmf_init_br2" 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:37.355 Cannot find device "nvmf_tgt_br" 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:37.355 Cannot find device "nvmf_tgt_br2" 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:37.355 Cannot find device "nvmf_init_br" 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:37.355 Cannot find device "nvmf_init_br2" 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:37.355 Cannot find device "nvmf_tgt_br" 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:37.355 Cannot find device "nvmf_tgt_br2" 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:37.355 Cannot find device "nvmf_br" 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:37.355 Cannot find device "nvmf_init_if" 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:37.355 Cannot find device "nvmf_init_if2" 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:37.355 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:37.355 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:29:37.355 
20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:37.356 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:37.356 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:29:37.356 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:37.356 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:37.356 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:37.356 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:37.356 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:37.356 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:37.356 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:37.614 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:37.614 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:37.614 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:29:37.615 00:29:37.615 --- 10.0.0.3 ping statistics --- 00:29:37.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.615 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:37.615 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:37.615 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:29:37.615 00:29:37.615 --- 10.0.0.4 ping statistics --- 00:29:37.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.615 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:37.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:37.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:29:37.615 00:29:37.615 --- 10.0.0.1 ping statistics --- 00:29:37.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.615 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:37.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:37.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:29:37.615 00:29:37.615 --- 10.0.0.2 ping statistics --- 00:29:37.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.615 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # return 0 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=67497 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 67497 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 67497 ']' 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:37.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:37.615 20:28:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:37.873 [2024-10-01 20:28:32.870852] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:29:37.873 [2024-10-01 20:28:32.870947] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:37.873 [2024-10-01 20:28:33.004548] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:37.873 [2024-10-01 20:28:33.074838] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:37.873 [2024-10-01 20:28:33.075051] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:37.873 [2024-10-01 20:28:33.075166] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:37.873 [2024-10-01 20:28:33.075248] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:37.873 [2024-10-01 20:28:33.075342] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:37.873 [2024-10-01 20:28:33.075553] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.873 [2024-10-01 20:28:33.075659] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:37.873 [2024-10-01 20:28:33.075987] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:37.873 [2024-10-01 20:28:33.076002] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.873 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:37.873 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:29:37.873 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:37.873 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:37.873 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:29:38.133 [2024-10-01 20:28:33.196656] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:38.133 Malloc0 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:38.133 [2024-10-01 20:28:33.250964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=67536 00:29:38.133 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=67538 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:38.134 { 00:29:38.134 "params": { 
00:29:38.134 "name": "Nvme$subsystem", 00:29:38.134 "trtype": "$TEST_TRANSPORT", 00:29:38.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.134 "adrfam": "ipv4", 00:29:38.134 "trsvcid": "$NVMF_PORT", 00:29:38.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.134 "hdgst": ${hdgst:-false}, 00:29:38.134 "ddgst": ${ddgst:-false} 00:29:38.134 }, 00:29:38.134 "method": "bdev_nvme_attach_controller" 00:29:38.134 } 00:29:38.134 EOF 00:29:38.134 )") 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=67540 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:38.134 { 00:29:38.134 "params": { 00:29:38.134 "name": "Nvme$subsystem", 00:29:38.134 "trtype": "$TEST_TRANSPORT", 00:29:38.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.134 "adrfam": "ipv4", 00:29:38.134 "trsvcid": "$NVMF_PORT", 00:29:38.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.134 "hdgst": ${hdgst:-false}, 00:29:38.134 "ddgst": ${ddgst:-false} 00:29:38.134 }, 00:29:38.134 "method": "bdev_nvme_attach_controller" 00:29:38.134 } 00:29:38.134 EOF 00:29:38.134 )") 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=67543 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:38.134 { 00:29:38.134 "params": { 00:29:38.134 "name": "Nvme$subsystem", 00:29:38.134 "trtype": "$TEST_TRANSPORT", 00:29:38.134 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.134 "adrfam": "ipv4", 00:29:38.134 "trsvcid": "$NVMF_PORT", 00:29:38.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.134 "hdgst": ${hdgst:-false}, 00:29:38.134 "ddgst": ${ddgst:-false} 00:29:38.134 }, 00:29:38.134 "method": "bdev_nvme_attach_controller" 00:29:38.134 } 00:29:38.134 EOF 00:29:38.134 )") 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:38.134 { 00:29:38.134 "params": { 00:29:38.134 "name": "Nvme$subsystem", 00:29:38.134 "trtype": "$TEST_TRANSPORT", 00:29:38.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.134 "adrfam": "ipv4", 00:29:38.134 "trsvcid": "$NVMF_PORT", 00:29:38.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.134 "hdgst": ${hdgst:-false}, 00:29:38.134 "ddgst": ${ddgst:-false} 00:29:38.134 }, 00:29:38.134 "method": "bdev_nvme_attach_controller" 00:29:38.134 } 00:29:38.134 EOF 00:29:38.134 )") 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:38.134 "params": { 00:29:38.134 "name": "Nvme1", 00:29:38.134 "trtype": "tcp", 00:29:38.134 "traddr": "10.0.0.3", 00:29:38.134 "adrfam": "ipv4", 00:29:38.134 "trsvcid": "4420", 00:29:38.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:38.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:38.134 "hdgst": false, 00:29:38.134 "ddgst": false 00:29:38.134 }, 00:29:38.134 "method": "bdev_nvme_attach_controller" 00:29:38.134 }' 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:38.134 "params": { 00:29:38.134 "name": "Nvme1", 00:29:38.134 "trtype": "tcp", 00:29:38.134 "traddr": "10.0.0.3", 00:29:38.134 "adrfam": "ipv4", 00:29:38.134 "trsvcid": "4420", 00:29:38.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:38.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:38.134 "hdgst": false, 00:29:38.134 "ddgst": false 00:29:38.134 }, 00:29:38.134 "method": "bdev_nvme_attach_controller" 00:29:38.134 }' 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:38.134 "params": { 00:29:38.134 "name": "Nvme1", 00:29:38.134 "trtype": "tcp", 00:29:38.134 "traddr": "10.0.0.3", 00:29:38.134 "adrfam": "ipv4", 00:29:38.134 "trsvcid": "4420", 00:29:38.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:38.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:38.134 "hdgst": false, 00:29:38.134 "ddgst": false 00:29:38.134 }, 00:29:38.134 "method": "bdev_nvme_attach_controller" 00:29:38.134 }' 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:38.134 "params": { 00:29:38.134 "name": "Nvme1", 00:29:38.134 "trtype": "tcp", 00:29:38.134 "traddr": "10.0.0.3", 00:29:38.134 "adrfam": "ipv4", 00:29:38.134 "trsvcid": "4420", 00:29:38.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:38.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:38.134 "hdgst": false, 00:29:38.134 "ddgst": false 00:29:38.134 }, 00:29:38.134 "method": "bdev_nvme_attach_controller" 00:29:38.134 }' 00:29:38.134 20:28:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 67536 00:29:38.134 [2024-10-01 20:28:33.331568] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:29:38.134 [2024-10-01 20:28:33.331648] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:29:38.134 [2024-10-01 20:28:33.331819] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:29:38.134 [2024-10-01 20:28:33.331901] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:38.134 [2024-10-01 20:28:33.363346] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:29:38.134 [2024-10-01 20:28:33.363495] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:29:38.134 [2024-10-01 20:28:33.379270] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:29:38.134 [2024-10-01 20:28:33.379778] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:29:38.394 [2024-10-01 20:28:33.502440] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.394 [2024-10-01 20:28:33.548677] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.394 [2024-10-01 20:28:33.549175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:29:38.394 [2024-10-01 20:28:33.588207] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.394 [2024-10-01 20:28:33.601301] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:29:38.394 [2024-10-01 20:28:33.629818] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.652 [2024-10-01 20:28:33.655417] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:29:38.652 [2024-10-01 20:28:33.700900] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:29:38.652 Running I/O for 1 seconds... 00:29:38.652 Running I/O for 1 seconds... 00:29:38.910 Running I/O for 1 seconds... 00:29:38.910 Running I/O for 1 seconds... 00:29:39.845 5984.00 IOPS, 23.38 MiB/s 00:29:39.845 Latency(us) 00:29:39.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.845 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:29:39.845 Nvme1n1 : 1.03 5941.76 23.21 0.00 0.00 21206.45 8996.31 37891.72 00:29:39.845 =================================================================================================================== 00:29:39.845 Total : 5941.76 23.21 0.00 0.00 21206.45 8996.31 37891.72 00:29:39.845 5751.00 IOPS, 22.46 MiB/s 00:29:39.845 Latency(us) 00:29:39.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.845 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:29:39.845 Nvme1n1 : 1.01 5856.40 22.88 0.00 0.00 21774.24 6345.08 40274.85 00:29:39.845 =================================================================================================================== 00:29:39.845 Total : 5856.40 22.88 0.00 0.00 21774.24 6345.08 40274.85 00:29:39.845 176552.00 IOPS, 689.66 MiB/s 00:29:39.845 Latency(us) 00:29:39.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.845 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:29:39.845 Nvme1n1 : 1.00 176193.34 688.26 0.00 0.00 722.25 351.88 4319.42 00:29:39.845 =================================================================================================================== 00:29:39.845 Total : 176193.34 688.26 0.00 0.00 722.25 351.88 4319.42 00:29:39.845 9353.00 IOPS, 36.54 MiB/s 00:29:39.845 Latency(us) 00:29:39.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.845 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:29:39.845 Nvme1n1 : 1.01 9425.11 36.82 0.00 0.00 13529.06 2457.60 18945.86 00:29:39.845 =================================================================================================================== 00:29:39.845 Total : 9425.11 36.82 0.00 0.00 13529.06 2457.60 18945.86 00:29:39.845 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 67538 00:29:39.845 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@39 -- # wait 67540 00:29:40.104 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 67543 00:29:40.104 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:40.104 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.104 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:40.104 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.104 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:29:40.104 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:29:40.104 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:40.104 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:29:40.104 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:40.104 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:29:40.104 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:40.104 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:40.104 rmmod nvme_tcp 00:29:40.104 rmmod nvme_fabrics 00:29:40.104 rmmod nvme_keyring 00:29:40.104 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:40.362 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:29:40.362 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:29:40.362 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 67497 ']' 00:29:40.362 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 67497 00:29:40.362 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 67497 ']' 00:29:40.362 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 67497 00:29:40.362 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:29:40.362 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:40.362 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67497 00:29:40.362 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:40.362 killing process with pid 67497 00:29:40.362 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:40.362 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67497' 00:29:40.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 67497 00:29:40.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 67497 00:29:40.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' 
'' == iso ']' 00:29:40.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:40.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:40.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:29:40.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:29:40.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:29:40.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:40.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:40.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:40.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:40.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:40.363 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:40.621 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:40.621 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:40.621 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:40.621 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:40.621 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:40.621 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:40.621 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:40.621 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:40.621 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:40.621 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:40.621 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:40.621 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.621 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:40.621 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.621 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:29:40.621 00:29:40.621 real 0m3.622s 00:29:40.621 user 0m14.731s 00:29:40.621 sys 0m2.103s 00:29:40.621 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:40.621 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:40.621 
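killprocess, traced above for pid 67497, is deliberately defensive: it first checks the pid still answers kill -0, reads the command name with ps --no-headers -o comm= to confirm it is about to signal an SPDK reactor rather than sudo, then kills and waits so the exit status is reaped. A condensed sketch of that flow, reconstructed from the numbered autotest_common.sh steps (the real helper's error handling is more thorough):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                   # nothing left to kill
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name != sudo ]] || return 1  # refuse to signal sudo
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                  # reap; propagate exit code
    }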
************************************ 00:29:40.621 END TEST nvmf_bdev_io_wait 00:29:40.621 ************************************ 00:29:40.621 20:28:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:29:40.621 20:28:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:40.621 20:28:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:40.621 20:28:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:29:40.621 ************************************ 00:29:40.621 START TEST nvmf_queue_depth 00:29:40.621 ************************************ 00:29:40.621 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:29:40.880 * Looking for test storage... 00:29:40.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:40.880 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:40.880 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:29:40.881 20:28:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:40.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.881 --rc genhtml_branch_coverage=1 00:29:40.881 --rc genhtml_function_coverage=1 00:29:40.881 --rc genhtml_legend=1 00:29:40.881 --rc geninfo_all_blocks=1 00:29:40.881 --rc geninfo_unexecuted_blocks=1 00:29:40.881 00:29:40.881 ' 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:40.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.881 --rc genhtml_branch_coverage=1 00:29:40.881 --rc genhtml_function_coverage=1 00:29:40.881 --rc genhtml_legend=1 00:29:40.881 --rc geninfo_all_blocks=1 00:29:40.881 --rc geninfo_unexecuted_blocks=1 00:29:40.881 00:29:40.881 ' 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:40.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.881 --rc genhtml_branch_coverage=1 00:29:40.881 --rc genhtml_function_coverage=1 00:29:40.881 --rc genhtml_legend=1 00:29:40.881 --rc geninfo_all_blocks=1 00:29:40.881 --rc geninfo_unexecuted_blocks=1 00:29:40.881 00:29:40.881 ' 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:40.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.881 --rc genhtml_branch_coverage=1 00:29:40.881 --rc genhtml_function_coverage=1 00:29:40.881 --rc genhtml_legend=1 00:29:40.881 --rc geninfo_all_blocks=1 00:29:40.881 --rc geninfo_unexecuted_blocks=1 00:29:40.881 00:29:40.881 ' 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
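The scripts/common.sh trace above is the version comparison used to pick lcov options: lt 1.15 2 splits both versions on '.', '-', and ':', walks the components as decimals, and succeeds because 1 < 2, so the pre-2.0 branch/function coverage flags get exported. A condensed sketch of the path exercised here (the in-tree comparator supports more operators and equality handling):

    decimal() {                      # non-numeric components compare as 0
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
    }

    cmp_versions() {
        local ver1 ver2 op=$2 v d1 d2 lt=0 gt=0
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            d1=$(decimal "${ver1[v]:-0}") d2=$(decimal "${ver2[v]:-0}")
            ((d1 > d2)) && gt=1 && break
            ((d1 < d2)) && lt=1 && break
        done
        case "$op" in
            '<') ((lt == 1)) ;;
            '>') ((gt == 1)) ;;
        esac
    }

    lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> success (returns 0)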
-- # uname -s 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:40.881 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:29:40.881 
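One genuine shell error is recorded above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' while building the app arguments, and the test builtin refuses an empty operand to a numeric comparison, printing "[: : integer expression expected" before the script falls through to the next branch. The variable being tested is not named in the trace; with a hypothetical flag standing in for it, the failing pattern and a defensive rewrite look like this:

    flag=''                      # unset/empty in this environment
    [ "$flag" -eq 1 ]            # -> "[: : integer expression expected", status 2

    # Defensive form: default the value so -eq always sees a number.
    if [ "${flag:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi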
20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:40.881 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # nvmf_veth_init 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:40.882 20:28:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:40.882 Cannot find device "nvmf_init_br" 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:40.882 Cannot find device "nvmf_init_br2" 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:40.882 Cannot find device "nvmf_tgt_br" 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:40.882 Cannot find device "nvmf_tgt_br2" 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:40.882 Cannot find device "nvmf_init_br" 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:40.882 Cannot find device "nvmf_init_br2" 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:40.882 Cannot find device "nvmf_tgt_br" 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:29:40.882 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:41.140 Cannot find device "nvmf_tgt_br2" 00:29:41.140 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:29:41.140 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:41.140 Cannot find device "nvmf_br" 00:29:41.140 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:41.141 Cannot find device "nvmf_init_if" 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:41.141 Cannot find device "nvmf_init_if2" 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:41.141 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:41.141 20:28:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:41.141 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:41.141 
20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:41.141 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:41.141 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:29:41.141 00:29:41.141 --- 10.0.0.3 ping statistics --- 00:29:41.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.141 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:41.141 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:41.141 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:29:41.141 00:29:41.141 --- 10.0.0.4 ping statistics --- 00:29:41.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.141 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:41.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:41.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:29:41.141 00:29:41.141 --- 10.0.0.1 ping statistics --- 00:29:41.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.141 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:41.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:41.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:29:41.141 00:29:41.141 --- 10.0.0.2 ping statistics --- 00:29:41.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.141 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # return 0 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:41.141 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:41.400 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:29:41.400 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:41.400 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:41.400 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:41.400 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=67808 00:29:41.400 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:41.400 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 67808 00:29:41.400 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 67808 ']' 00:29:41.400 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.400 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:41.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.400 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.400 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:41.400 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:41.400 [2024-10-01 20:28:36.497831] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
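Everything nvmf_veth_init traced above reduces to a small topology: two initiator veth pairs stay in the root namespace, two target pairs terminate inside nvmf_tgt_ns_spdk, all four bridge-side peers join nvmf_br, the port-4420 ACCEPT rules are tagged with an SPDK_NVMF comment so teardown can strip them, and one ping per address proves the datapath. Condensed from the commands in the trace (a sketch, not the full helper):

    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if       # first initiator IP
    ip addr add 10.0.0.2/24 dev nvmf_init_if2      # second initiator IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done

    # Tagged so "iptables-save | grep -v SPDK_NVMF | iptables-restore" can
    # undo exactly these rules later (the iptr teardown seen in this log).
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4       # root ns -> target ns
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2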
00:29:41.400 [2024-10-01 20:28:36.497942] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:41.400 [2024-10-01 20:28:36.641153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.658 [2024-10-01 20:28:36.711028] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:41.658 [2024-10-01 20:28:36.711099] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:41.658 [2024-10-01 20:28:36.711126] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:41.658 [2024-10-01 20:28:36.711137] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:41.658 [2024-10-01 20:28:36.711147] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:41.658 [2024-10-01 20:28:36.711182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.658 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:41.658 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:29:41.658 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:41.658 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:41.658 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:41.658 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:41.658 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:41.658 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.658 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:41.917 [2024-10-01 20:28:36.911597] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:41.917 Malloc0 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:41.917 [2024-10-01 20:28:36.963563] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=67845 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 67845 /var/tmp/bdevperf.sock 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 67845 ']' 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:41.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:41.917 20:28:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:41.917 [2024-10-01 20:28:37.020721] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
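The queue_depth target setup traced above is five RPCs end to end: create the TCP transport with 8192-byte in-capsule data, back it with a 64 MiB malloc bdev, create subsystem cnode1, attach the namespace, and listen on 10.0.0.3:4420. rpc_cmd in the trace is a thin wrapper over scripts/rpc.py against the default /var/tmp/spdk.sock; spelled out directly, with arguments copied from the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192         # -u: in-capsule data size
    $RPC bdev_malloc_create 64 512 -b Malloc0            # 64 MiB, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                         # -a: allow any host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420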
00:29:41.917 [2024-10-01 20:28:37.020820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67845 ] 00:29:41.917 [2024-10-01 20:28:37.153487] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.176 [2024-10-01 20:28:37.239592] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.176 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:42.176 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:29:42.176 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:42.176 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.176 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:42.434 NVMe0n1 00:29:42.434 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.434 20:28:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:42.434 Running I/O for 10 seconds... 00:29:52.681 7222.00 IOPS, 28.21 MiB/s 7680.50 IOPS, 30.00 MiB/s 7844.67 IOPS, 30.64 MiB/s 7938.25 IOPS, 31.01 MiB/s 8062.20 IOPS, 31.49 MiB/s 8078.83 IOPS, 31.56 MiB/s 8077.43 IOPS, 31.55 MiB/s 8156.12 IOPS, 31.86 MiB/s 8198.56 IOPS, 32.03 MiB/s 8250.20 IOPS, 32.23 MiB/s 00:29:52.681 Latency(us) 00:29:52.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:52.681 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:29:52.681 Verification LBA range: start 0x0 length 0x4000 00:29:52.681 NVMe0n1 : 10.08 8274.26 32.32 0.00 0.00 123147.56 27763.43 118203.11 00:29:52.681 =================================================================================================================== 00:29:52.681 Total : 8274.26 32.32 0.00 0.00 123147.56 27763.43 118203.11 00:29:52.681 { 00:29:52.681 "results": [ 00:29:52.681 { 00:29:52.681 "job": "NVMe0n1", 00:29:52.681 "core_mask": "0x1", 00:29:52.681 "workload": "verify", 00:29:52.681 "status": "finished", 00:29:52.681 "verify_range": { 00:29:52.681 "start": 0, 00:29:52.681 "length": 16384 00:29:52.681 }, 00:29:52.681 "queue_depth": 1024, 00:29:52.681 "io_size": 4096, 00:29:52.681 "runtime": 10.084041, 00:29:52.681 "iops": 8274.262272436219, 00:29:52.681 "mibps": 32.32133700170398, 00:29:52.681 "io_failed": 0, 00:29:52.681 "io_timeout": 0, 00:29:52.681 "avg_latency_us": 123147.56128888298, 00:29:52.681 "min_latency_us": 27763.432727272728, 00:29:52.681 "max_latency_us": 118203.11272727273 00:29:52.681 } 00:29:52.681 ], 00:29:52.681 "core_count": 1 00:29:52.681 } 00:29:52.681 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 67845 00:29:52.681 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 67845 ']' 00:29:52.681 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 67845 00:29:52.681 20:28:47 
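The depth-1024 run itself is driven over bdevperf's private RPC socket: bdevperf starts idle with -z, the controller is attached through that socket, and bdevperf.py perform_tests kicks off the 10-second verify workload whose table and JSON results appear above. Condensed, with paths and arguments as traced:

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bdevperf.sock

    # -z: wait for RPC instead of running immediately; -q 1024 is the
    # queue depth under test, -o 4096 the IO size, -t 10 the runtime.
    "$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!

    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
    kill "$bdevperf_pid" && wait "$bdevperf_pid"   # the harness uses killprocess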
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:29:52.681 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:52.681 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67845 00:29:52.681 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:52.681 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:52.681 killing process with pid 67845 00:29:52.681 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67845' 00:29:52.681 Received shutdown signal, test time was about 10.000000 seconds 00:29:52.681 00:29:52.681 Latency(us) 00:29:52.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:52.681 =================================================================================================================== 00:29:52.681 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:52.681 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 67845 00:29:52.681 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 67845 00:29:52.681 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:29:52.681 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:29:52.681 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:52.681 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:29:52.940 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:52.940 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:29:52.940 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:52.940 20:28:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:52.940 rmmod nvme_tcp 00:29:52.940 rmmod nvme_fabrics 00:29:52.940 rmmod nvme_keyring 00:29:52.940 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:52.940 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:29:52.940 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:29:52.940 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 67808 ']' 00:29:52.940 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 67808 00:29:52.940 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 67808 ']' 00:29:52.940 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 67808 00:29:52.941 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:29:52.941 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:52.941 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67808 00:29:52.941 20:28:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:52.941 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:52.941 killing process with pid 67808 00:29:52.941 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67808' 00:29:52.941 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 67808 00:29:52.941 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 67808 00:29:53.200 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:53.200 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:53.200 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:53.200 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:29:53.200 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:53.200 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:29:53.200 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:29:53.200 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:53.200 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:53.200 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:53.200 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:53.200 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:53.200 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:53.200 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:53.200 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:53.200 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:53.200 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:53.200 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:53.200 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:53.200 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:53.200 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:53.200 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:53.459 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:53.459 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
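Teardown, traced here and earlier after the bdev_io_wait test, runs nvmftestfini: unload nvme-tcp, nvme-fabrics, and nvme-keyring, kill the target, then iptr and nvmf_veth_fini undo exactly what init created. Condensed from the commands in the trace; the final netns delete is an assumption about what _remove_spdk_ns does:

    # iptr: drop only the rules tagged SPDK_NVMF, leaving the rest intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # nvmf_veth_fini: detach from the bridge, take links down, delete them.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk    # assumed body of _remove_spdk_ns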
00:29:53.459 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.459 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.459 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:29:53.459 00:29:53.459 real 0m12.659s 00:29:53.459 user 0m21.476s 00:29:53.459 sys 0m2.074s 00:29:53.459 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:53.459 ************************************ 00:29:53.459 END TEST nvmf_queue_depth 00:29:53.459 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:53.459 ************************************ 00:29:53.459 20:28:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:29:53.459 20:28:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:53.459 20:28:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:53.459 20:28:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:29:53.459 ************************************ 00:29:53.459 START TEST nvmf_target_multipath 00:29:53.459 ************************************ 00:29:53.459 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:29:53.459 * Looking for test storage... 00:29:53.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:53.459 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:53.459 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:29:53.459 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:53.719 20:28:48 
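
The START TEST / END TEST banners and the real/user/sys summary above come from the run_test wrapper that launches each sub-suite. Schematically, as a sketch only; the real wrapper also manages xtrace state and exit-code bookkeeping:

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"        # produces the real/user/sys lines seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp
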
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:29:53.719 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:53.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.720 --rc genhtml_branch_coverage=1 00:29:53.720 --rc genhtml_function_coverage=1 00:29:53.720 --rc genhtml_legend=1 00:29:53.720 --rc geninfo_all_blocks=1 00:29:53.720 --rc geninfo_unexecuted_blocks=1 00:29:53.720 00:29:53.720 ' 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:53.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.720 --rc genhtml_branch_coverage=1 00:29:53.720 --rc genhtml_function_coverage=1 00:29:53.720 --rc genhtml_legend=1 00:29:53.720 --rc geninfo_all_blocks=1 00:29:53.720 --rc geninfo_unexecuted_blocks=1 00:29:53.720 00:29:53.720 ' 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:53.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.720 --rc genhtml_branch_coverage=1 00:29:53.720 --rc genhtml_function_coverage=1 00:29:53.720 --rc genhtml_legend=1 00:29:53.720 --rc geninfo_all_blocks=1 00:29:53.720 --rc geninfo_unexecuted_blocks=1 00:29:53.720 00:29:53.720 ' 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:53.720 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.720 --rc genhtml_branch_coverage=1 00:29:53.720 --rc genhtml_function_coverage=1 00:29:53.720 --rc genhtml_legend=1 00:29:53.720 --rc geninfo_all_blocks=1 00:29:53.720 --rc geninfo_unexecuted_blocks=1 00:29:53.720 00:29:53.720 ' 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
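
The cmp_versions walk traced a few lines up (splitting '1.15' and '2' on '.'/'-' and comparing field by field, missing fields defaulting to 0) is self-contained enough to restate. A standalone sketch of the same less-than test:

lt() {  # usage: lt 1.15 2  -> exit 0 when $1 < $2
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1  # equal all the way down: not less-than
}
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "old lcov: enabling branch-coverage flags"
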
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:53.720 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:53.720 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # nvmf_veth_init 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # 
NVMF_BRIDGE=nvmf_br 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:53.721 Cannot find device "nvmf_init_br" 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:53.721 Cannot find device "nvmf_init_br2" 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:53.721 Cannot find device "nvmf_tgt_br" 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:53.721 Cannot find device "nvmf_tgt_br2" 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:53.721 Cannot find device "nvmf_init_br" 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:53.721 Cannot find device "nvmf_init_br2" 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:53.721 Cannot find device "nvmf_tgt_br" 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:53.721 Cannot find device "nvmf_tgt_br2" 00:29:53.721 20:28:48 
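
All the "Cannot find device" lines here are expected: nvmf_veth_init starts by tearing down any leftovers from a previous run, so every delete is attempted and allowed to fail, making the setup idempotent. The same pattern with failures silenced rather than printed; the redirections are an adaptation, the suite lets the errors show as above:

for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster 2>/dev/null || true
    ip link set "$dev" down 2>/dev/null || true
done
ip link delete nvmf_br type bridge 2>/dev/null || true
ip link delete nvmf_init_if 2>/dev/null || true
ip link delete nvmf_init_if2 2>/dev/null || true
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
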
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:53.721 Cannot find device "nvmf_br" 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:53.721 Cannot find device "nvmf_init_if" 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:53.721 Cannot find device "nvmf_init_if2" 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:53.721 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:53.721 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:53.721 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:53.979 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:53.979 20:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:53.979 20:28:49 
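
With the slate clean, the init sequence above builds the test network: one namespace for the target, veth pairs whose bridge-side ends stay in the root namespace, and a single /24 shared by all four addresses. Condensed to one initiator/target pair (run as root; the suite creates two of each):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

The bridge enslaving and the SPDK_NVMF-tagged iptables ACCEPT rules follow in the next lines, after which the cross-namespace pings prove 10.0.0.1/2 can reach 10.0.0.3/4.
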
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:53.979 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:53.979 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:29:53.979 00:29:53.979 --- 10.0.0.3 ping statistics --- 00:29:53.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.979 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:53.979 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:29:53.979 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:29:53.979 00:29:53.979 --- 10.0.0.4 ping statistics --- 00:29:53.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.979 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:53.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:53.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:29:53.979 00:29:53.979 --- 10.0.0.1 ping statistics --- 00:29:53.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.979 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:53.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:53.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:29:53.979 00:29:53.979 --- 10.0.0.2 ping statistics --- 00:29:53.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.979 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # return 0 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # nvmfpid=68221 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # waitforlisten 68221 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
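
nvmfappstart above prefixes the target binary with the namespace wrapper and then blocks in waitforlisten until the RPC socket answers. Condensed; the rpc_get_methods poll is an assumption standing in for waitforlisten's actual readiness check:

spdk=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until "$spdk/scripts/rpc.py" rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
    sleep 0.5
done
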
common/autotest_common.sh@831 -- # '[' -z 68221 ']' 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:53.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:53.979 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:54.238 [2024-10-01 20:28:49.298572] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:29:54.238 [2024-10-01 20:28:49.298697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.238 [2024-10-01 20:28:49.450290] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:54.497 [2024-10-01 20:28:49.521319] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:54.497 [2024-10-01 20:28:49.521383] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:54.497 [2024-10-01 20:28:49.521397] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:54.497 [2024-10-01 20:28:49.521407] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:54.497 [2024-10-01 20:28:49.521416] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
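
One detail worth decoding from the EAL parameters above: -m 0xF is a core bitmask, 0xF = 0b1111, i.e. cores 0 through 3, which is why "Total cores available: 4" is reported and four reactor threads start just below. A one-liner to expand any mask:

mask=0xF; printf 'cores:'; for i in {0..63}; do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done; echo
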
00:29:54.497 [2024-10-01 20:28:49.522208] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.497 [2024-10-01 20:28:49.522266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:54.497 [2024-10-01 20:28:49.522349] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:54.497 [2024-10-01 20:28:49.522356] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.497 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:54.497 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:29:54.497 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:54.497 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:54.497 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:54.497 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:54.497 20:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:55.066 [2024-10-01 20:28:50.020512] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:55.066 20:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:55.326 Malloc0 00:29:55.326 20:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:29:55.584 20:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:55.843 20:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:56.101 [2024-10-01 20:28:51.272838] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:56.101 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:29:56.360 [2024-10-01 20:28:51.577071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:29:56.360 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:29:56.618 20:28:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:29:56.877 20:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # 
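
The bring-up traced above reduces to six RPCs plus two host-side connects: transport, backing bdev, subsystem (-r enables ANA reporting), namespace, and one listener per target address, then nvme-cli attaches through both paths (-g/-G turn on TCP header and data digests). Collected from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 \
    --hostid=e1b29161-2f6e-4e88-9903-c5274a848703
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 \
    --hostid=e1b29161-2f6e-4e88-9903-c5274a848703
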
waitforserial SPDKISFASTANDAWESOME 00:29:56.877 20:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:29:56.877 20:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:29:56.877 20:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:29:56.877 20:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:29:58.803 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:29:58.803 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:29:58.804 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:29:59.061 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:29:59.061 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:29:59.061 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:29:59.061 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:29:59.061 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:29:59.061 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:29:59.061 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:29:59.061 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:29:59.061 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:29:59.061 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:29:59.061 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:29:59.061 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:29:59.061 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:29:59.061 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:29:59.061 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:29:59.062 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:29:59.062 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:29:59.062 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:29:59.062 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
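
get_subsystem above resolves the serial back to nvme-subsys0, and the two controller-path block names (nvme0c0n1, nvme0c1n1) come from globbing that subsystem's sysfs tree; check_ana_state then just polls the kernel's per-path ANA state file. A sketch consistent with the trace (timeout of 20 one-second waits):

check_ana_state() {
    local path=$1 ana_state=$2 timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
        (( timeout-- == 0 )) && return 1
        sleep 1s
    done
}
check_ana_state nvme0c0n1 optimized
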
00:29:59.062 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:29:59.062 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:29:59.062 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:29:59.062 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:29:59.062 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:29:59.062 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:29:59.062 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:29:59.062 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:29:59.062 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:29:59.062 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:29:59.062 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=68345 00:29:59.062 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:29:59.062 20:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:29:59.062 [global] 00:29:59.062 thread=1 00:29:59.062 invalidate=1 00:29:59.062 rw=randrw 00:29:59.062 time_based=1 00:29:59.062 runtime=6 00:29:59.062 ioengine=libaio 00:29:59.062 direct=1 00:29:59.062 bs=4096 00:29:59.062 iodepth=128 00:29:59.062 norandommap=0 00:29:59.062 numjobs=1 00:29:59.062 00:29:59.062 verify_dump=1 00:29:59.062 verify_backlog=512 00:29:59.062 verify_state_save=0 00:29:59.062 do_verify=1 00:29:59.062 verify=crc32c-intel 00:29:59.062 [job0] 00:29:59.062 filename=/dev/nvme0n1 00:29:59.062 Could not set queue depth (nvme0n1) 00:29:59.062 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:59.062 fio-3.35 00:29:59.062 Starting 1 thread 00:29:59.994 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:30:00.259 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:30:00.529 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:30:00.529 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:30:00.529 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:30:00.529 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local 
ana_state_f=/sys/block/nvme0c0n1/ana_state 00:30:00.529 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:30:00.529 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:30:00.529 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:30:00.529 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:30:00.529 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:30:00.529 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:30:00.529 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:30:00.529 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:30:00.529 20:28:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:30:01.905 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:30:01.905 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:30:01.905 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:30:01.905 20:28:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:30:01.905 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:30:02.163 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:30:02.163 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:30:02.163 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:30:02.163 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:30:02.163 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:30:02.163 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:30:02.163 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:30:02.163 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:30:02.163 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:30:02.163 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:30:02.163 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:30:02.163 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:30:02.163 20:28:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:30:03.537 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:30:03.537 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:30:03.537 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:30:03.537 20:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 68345 00:30:05.437 00:30:05.437 job0: (groupid=0, jobs=1): err= 0: pid=68372: Tue Oct 1 20:29:00 2024 00:30:05.437 read: IOPS=10.6k, BW=41.4MiB/s (43.5MB/s)(249MiB/6006msec) 00:30:05.437 slat (usec): min=3, max=5991, avg=54.93, stdev=249.65 00:30:05.437 clat (usec): min=763, max=18156, avg=8247.36, stdev=1369.61 00:30:05.437 lat (usec): min=794, max=18188, avg=8302.29, stdev=1381.32 00:30:05.437 clat percentiles (usec): 00:30:05.437 | 1.00th=[ 4817], 5.00th=[ 6194], 10.00th=[ 6980], 20.00th=[ 7439], 00:30:05.437 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8356], 00:30:05.437 | 70.00th=[ 8717], 80.00th=[ 9110], 90.00th=[ 9765], 95.00th=[10683], 00:30:05.437 | 99.00th=[12518], 99.50th=[13042], 99.90th=[15139], 99.95th=[15139], 00:30:05.437 | 99.99th=[16581] 00:30:05.437 bw ( KiB/s): min= 9024, max=27416, per=52.55%, avg=22299.33, stdev=5570.50, samples=12 00:30:05.437 iops : min= 2256, max= 6854, avg=5574.83, stdev=1392.63, samples=12 00:30:05.437 write: IOPS=6231, BW=24.3MiB/s (25.5MB/s)(131MiB/5378msec); 0 zone resets 00:30:05.437 slat (usec): min=12, max=5874, avg=64.34, stdev=164.04 00:30:05.437 clat (usec): min=467, max=15653, avg=7052.63, stdev=1099.82 00:30:05.437 lat (usec): min=521, max=15678, avg=7116.97, stdev=1102.93 00:30:05.437 clat percentiles (usec): 00:30:05.437 | 1.00th=[ 3785], 5.00th=[ 5080], 10.00th=[ 5997], 20.00th=[ 6456], 00:30:05.437 | 30.00th=[ 6718], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7242], 00:30:05.437 | 70.00th=[ 7439], 80.00th=[ 7701], 90.00th=[ 8029], 95.00th=[ 8455], 00:30:05.437 | 99.00th=[10552], 99.50th=[11338], 99.90th=[12780], 99.95th=[13304], 00:30:05.437 | 99.99th=[15664] 00:30:05.437 bw ( KiB/s): min= 9400, max=27136, per=89.48%, avg=22304.67, stdev=5303.24, samples=12 00:30:05.437 iops : min= 2350, max= 6784, avg=5576.17, stdev=1325.81, samples=12 00:30:05.437 lat (usec) : 500=0.01%, 
750=0.01%, 1000=0.01% 00:30:05.437 lat (msec) : 2=0.03%, 4=0.65%, 10=93.72%, 20=5.59% 00:30:05.437 cpu : usr=5.50%, sys=22.63%, ctx=6194, majf=0, minf=90 00:30:05.437 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:30:05.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:05.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:05.437 issued rwts: total=63715,33512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:05.437 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:05.437 00:30:05.437 Run status group 0 (all jobs): 00:30:05.437 READ: bw=41.4MiB/s (43.5MB/s), 41.4MiB/s-41.4MiB/s (43.5MB/s-43.5MB/s), io=249MiB (261MB), run=6006-6006msec 00:30:05.437 WRITE: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=131MiB (137MB), run=5378-5378msec 00:30:05.437 00:30:05.437 Disk stats (read/write): 00:30:05.437 nvme0n1: ios=62888/32754, merge=0/0, ticks=486816/215965, in_queue=702781, util=98.63% 00:30:05.437 20:29:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:30:05.437 20:29:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:30:06.002 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:30:06.002 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:30:06.002 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:30:06.002 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:30:06.002 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:30:06.002 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:30:06.002 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:30:06.002 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:30:06.002 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:30:06.002 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:30:06.002 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:30:06.002 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:30:06.002 20:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:30:06.937 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:30:06.937 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:30:06.937 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:30:06.937 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:30:06.937 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=68506 00:30:06.937 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:30:06.937 20:29:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:30:06.937 [global] 00:30:06.937 thread=1 00:30:06.937 invalidate=1 00:30:06.937 rw=randrw 00:30:06.937 time_based=1 00:30:06.937 runtime=6 00:30:06.937 ioengine=libaio 00:30:06.937 direct=1 00:30:06.937 bs=4096 00:30:06.937 iodepth=128 00:30:06.937 norandommap=0 00:30:06.937 numjobs=1 00:30:06.937 00:30:06.937 verify_dump=1 00:30:06.937 verify_backlog=512 00:30:06.937 verify_state_save=0 00:30:06.937 do_verify=1 00:30:06.937 verify=crc32c-intel 00:30:06.937 [job0] 00:30:06.937 filename=/dev/nvme0n1 00:30:06.937 Could not set queue depth (nvme0n1) 00:30:06.937 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:06.937 fio-3.35 00:30:06.937 Starting 1 thread 00:30:07.870 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:30:08.128 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:30:08.385 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:30:08.385 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:30:08.385 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:30:08.385 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:30:08.385 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:30:08.385 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:30:08.385 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:30:08.385 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:30:08.385 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:30:08.385 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:30:08.385 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:30:08.385 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:30:08.385 20:29:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:30:09.758 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:30:09.758 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:30:09.758 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:30:09.758 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:30:09.758 20:29:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:30:10.015 20:29:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:30:10.015 20:29:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:30:10.015 20:29:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:30:10.015 20:29:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:30:10.015 20:29:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:30:10.015 20:29:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:30:10.015 20:29:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:30:10.015 20:29:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:30:10.015 20:29:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:30:10.015 20:29:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:30:10.015 20:29:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:30:10.015 20:29:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:30:10.015 20:29:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:30:10.993 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:30:10.993 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:30:10.993 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:30:10.993 20:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 68506 00:30:13.523 00:30:13.523 job0: (groupid=0, jobs=1): err= 0: pid=68527: Tue Oct 1 20:29:08 2024 00:30:13.523 read: IOPS=11.6k, BW=45.4MiB/s (47.6MB/s)(273MiB/6006msec) 00:30:13.523 slat (usec): min=2, max=4986, avg=43.98, stdev=214.03 00:30:13.523 clat (usec): min=787, max=16609, avg=7533.20, stdev=1841.50 00:30:13.523 lat (usec): min=829, max=16620, avg=7577.18, stdev=1859.99 00:30:13.523 clat percentiles (usec): 00:30:13.523 | 1.00th=[ 2737], 5.00th=[ 3982], 10.00th=[ 4817], 20.00th=[ 6390], 00:30:13.523 | 30.00th=[ 7242], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7898], 00:30:13.523 | 70.00th=[ 8291], 80.00th=[ 8717], 90.00th=[ 9372], 95.00th=[10290], 00:30:13.523 | 99.00th=[12387], 99.50th=[13304], 99.90th=[14746], 99.95th=[15270], 00:30:13.523 | 99.99th=[16319] 00:30:13.523 bw ( KiB/s): min= 4576, max=42848, per=54.49%, avg=25352.67, stdev=10650.92, samples=12 00:30:13.523 iops : min= 1144, max=10712, avg=6338.17, stdev=2662.73, samples=12 00:30:13.523 write: IOPS=7299, BW=28.5MiB/s (29.9MB/s)(149MiB/5218msec); 0 zone resets 00:30:13.523 slat (usec): min=3, max=2450, avg=52.14, stdev=140.99 00:30:13.523 clat (usec): min=691, max=14804, avg=6204.30, stdev=1870.38 00:30:13.523 lat (usec): min=721, max=14874, avg=6256.43, stdev=1886.97 00:30:13.523 clat percentiles (usec): 00:30:13.523 | 1.00th=[ 2147], 5.00th=[ 2868], 10.00th=[ 3425], 20.00th=[ 4424], 00:30:13.523 | 30.00th=[ 5407], 40.00th=[ 6194], 50.00th=[ 6587], 60.00th=[ 6915], 00:30:13.523 | 70.00th=[ 7177], 80.00th=[ 7504], 90.00th=[ 7963], 95.00th=[ 8848], 00:30:13.523 | 99.00th=[11469], 99.50th=[12125], 99.90th=[13435], 99.95th=[14222], 00:30:13.523 | 99.99th=[14615] 00:30:13.523 bw ( KiB/s): min= 5008, max=42464, per=86.85%, avg=25356.00, stdev=10433.54, samples=12 00:30:13.523 iops : min= 1252, max=10616, avg=6339.00, stdev=2608.39, samples=12 00:30:13.523 lat (usec) : 750=0.01%, 1000=0.01% 00:30:13.523 lat (msec) : 2=0.34%, 4=8.54%, 10=86.51%, 20=4.60% 00:30:13.523 cpu : usr=5.46%, sys=23.69%, ctx=7566, majf=0, minf=127 00:30:13.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:13.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:13.523 issued rwts: total=69854,38087,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:13.523 00:30:13.523 Run status group 0 (all jobs): 00:30:13.523 READ: bw=45.4MiB/s (47.6MB/s), 45.4MiB/s-45.4MiB/s (47.6MB/s-47.6MB/s), io=273MiB (286MB), run=6006-6006msec 00:30:13.523 WRITE: bw=28.5MiB/s (29.9MB/s), 28.5MiB/s-28.5MiB/s (29.9MB/s-29.9MB/s), io=149MiB (156MB), run=5218-5218msec 00:30:13.523 00:30:13.523 Disk stats (read/write): 00:30:13.523 nvme0n1: ios=68949/37422, merge=0/0, ticks=484088/213947, in_queue=698035, util=98.62% 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:13.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 
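The xtrace lines above expose the whole body of multipath.sh's check_ana_state helper. Condensed from the traced commands (the function name, variable names, and the 20-iteration budget come straight from the trace; the exact ordering of the sleep and the timeout decrement is approximated), it is a one-second polling loop against the multipath device's sysfs ANA state:

check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    # Poll until the sysfs node exists and reports the expected ANA state,
    # giving up after roughly twenty one-second retries.
    while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
        (( timeout-- == 0 )) && return 1
        sleep 1s
    done
}

The \o\p\t\i\m\i\z\e\d escaping in the trace is just bash xtrace quoting the right-hand side of != so it is matched literally rather than as a glob pattern.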
00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:13.523 rmmod nvme_tcp 00:30:13.523 rmmod nvme_fabrics 00:30:13.523 rmmod nvme_keyring 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n 68221 ']' 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # killprocess 68221 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 68221 ']' 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 68221 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:13.523 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68221 00:30:13.782 killing process with pid 68221 00:30:13.782 20:29:08 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:13.782 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:13.782 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68221' 00:30:13.782 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 68221 00:30:13.782 20:29:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 68221 00:30:13.782 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:13.782 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:13.782 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:13.782 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:13.782 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:30:13.782 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:13.782 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:30:13.782 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:13.782 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:13.782 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:14.069 20:29:09 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:30:14.069 00:30:14.069 real 0m20.672s 00:30:14.069 user 1m20.494s 00:30:14.069 sys 0m6.331s 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:14.069 ************************************ 00:30:14.069 END TEST nvmf_target_multipath 00:30:14.069 ************************************ 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:30:14.069 ************************************ 00:30:14.069 START TEST nvmf_zcopy 00:30:14.069 ************************************ 00:30:14.069 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:30:14.328 * Looking for test storage... 
00:30:14.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:14.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.328 --rc genhtml_branch_coverage=1 00:30:14.328 --rc genhtml_function_coverage=1 00:30:14.328 --rc genhtml_legend=1 00:30:14.328 --rc geninfo_all_blocks=1 00:30:14.328 --rc geninfo_unexecuted_blocks=1 00:30:14.328 00:30:14.328 ' 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:14.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.328 --rc genhtml_branch_coverage=1 00:30:14.328 --rc genhtml_function_coverage=1 00:30:14.328 --rc genhtml_legend=1 00:30:14.328 --rc geninfo_all_blocks=1 00:30:14.328 --rc geninfo_unexecuted_blocks=1 00:30:14.328 00:30:14.328 ' 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:14.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.328 --rc genhtml_branch_coverage=1 00:30:14.328 --rc genhtml_function_coverage=1 00:30:14.328 --rc genhtml_legend=1 00:30:14.328 --rc geninfo_all_blocks=1 00:30:14.328 --rc geninfo_unexecuted_blocks=1 00:30:14.328 00:30:14.328 ' 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:14.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.328 --rc genhtml_branch_coverage=1 00:30:14.328 --rc genhtml_function_coverage=1 00:30:14.328 --rc genhtml_legend=1 00:30:14.328 --rc geninfo_all_blocks=1 00:30:14.328 --rc geninfo_unexecuted_blocks=1 00:30:14.328 00:30:14.328 ' 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
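Sandwiched into the trace above is scripts/common.sh's generic version comparator, used here to pick lcov coverage flags. Boiled down from the traced commands (the real helper also validates every component through decimal(), dropped here for brevity), it splits versions on dots, dashes, and colons and compares them field by field:

lt() { cmp_versions "$1" '<' "$2"; }                  # as traced: lt 1.15 2
cmp_versions() {
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        # Missing components default to 0, so 1.15 compares as 1.15.0 against 2.0.0.
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
    done
    [[ $2 == *'='* ]]                                  # equal versions satisfy ==, <=, >=
}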
00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:14.328 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
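The "[: : integer expression expected" complaint from nvmf/common.sh line 33 above is a classic unset-variable slip: build_nvmf_app_args runs '[' '' -eq 1 ']' and test refuses an empty string where -eq wants an integer. A two-line reproduction, with FLAG standing in for whatever variable is unset on that line (the log does not show its name):

unset FLAG
[ "$FLAG" -eq 1 ]          # bash: [: : integer expression expected (exit status 2)
[ "${FLAG:-0}" -eq 1 ]     # defaulting the expansion keeps the numeric test well-formed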
00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # nvmf_veth_init 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:14.328 Cannot find device "nvmf_init_br" 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:30:14.328 20:29:09 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:14.328 Cannot find device "nvmf_init_br2" 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:14.328 Cannot find device "nvmf_tgt_br" 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:14.328 Cannot find device "nvmf_tgt_br2" 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:14.328 Cannot find device "nvmf_init_br" 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:14.328 Cannot find device "nvmf_init_br2" 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:30:14.328 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:14.328 Cannot find device "nvmf_tgt_br" 00:30:14.586 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:30:14.586 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:14.587 Cannot find device "nvmf_tgt_br2" 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:14.587 Cannot find device "nvmf_br" 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:14.587 Cannot find device "nvmf_init_if" 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:14.587 Cannot find device "nvmf_init_if2" 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:14.587 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:14.587 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:14.587 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:14.846 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:14.846 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:14.846 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:14.846 20:29:09 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:14.846 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:14.846 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:14.846 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:14.846 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:14.846 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:30:14.846 00:30:14.846 --- 10.0.0.3 ping statistics --- 00:30:14.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.846 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:30:14.846 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:14.846 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:14.846 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:30:14.846 00:30:14.846 --- 10.0.0.4 ping statistics --- 00:30:14.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.846 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:30:14.846 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:14.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:14.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:30:14.846 00:30:14.846 --- 10.0.0.1 ping statistics --- 00:30:14.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.846 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:30:14.846 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:14.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:14.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:30:14.846 00:30:14.846 --- 10.0.0.2 ping statistics --- 00:30:14.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.846 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:30:14.846 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:14.846 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # return 0 00:30:14.846 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:14.847 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:14.847 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:14.847 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:14.847 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:14.847 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:14.847 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:14.847 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:14.847 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:14.847 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:14.847 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:14.847 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=68860 00:30:14.847 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 68860 00:30:14.847 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 68860 ']' 00:30:14.847 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:14.847 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.847 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:14.847 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.847 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:14.847 20:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:14.847 [2024-10-01 20:29:09.953008] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
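nvmf_veth_init above assembles the whole TCP test topology before the target starts, then verifies it with the four pings. Stripped of xtrace noise, the traced commands for the first initiator/target pair reduce to the sketch below; the second pair (nvmf_init_if2 at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.4) is created the same way:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, 10.0.0.1
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side, 10.0.0.3
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT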
00:30:14.847 [2024-10-01 20:29:09.953096] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.847 [2024-10-01 20:29:10.086614] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.106 [2024-10-01 20:29:10.146594] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:15.106 [2024-10-01 20:29:10.146654] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:15.106 [2024-10-01 20:29:10.146666] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:15.106 [2024-10-01 20:29:10.146674] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:15.106 [2024-10-01 20:29:10.146682] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:15.106 [2024-10-01 20:29:10.146711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:15.106 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:15.106 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:30:15.106 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:15.106 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:15.106 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:15.106 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.106 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:15.106 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:15.106 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.106 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:15.106 [2024-10-01 20:29:10.336021] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.106 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.106 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:15.106 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.106 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:15.106 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.106 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:15.106 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.106 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:15.106 [2024-10-01 20:29:10.352182] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.3 port 4420 *** 00:30:15.106 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.106 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:30:15.106 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.106 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:15.366 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.366 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:15.366 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.366 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:15.366 malloc0 00:30:15.366 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.366 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:15.366 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.366 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:15.366 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.366 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:15.366 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:15.366 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:30:15.366 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:30:15.366 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:15.366 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:15.366 { 00:30:15.366 "params": { 00:30:15.366 "name": "Nvme$subsystem", 00:30:15.366 "trtype": "$TEST_TRANSPORT", 00:30:15.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.366 "adrfam": "ipv4", 00:30:15.366 "trsvcid": "$NVMF_PORT", 00:30:15.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.366 "hdgst": ${hdgst:-false}, 00:30:15.366 "ddgst": ${ddgst:-false} 00:30:15.366 }, 00:30:15.366 "method": "bdev_nvme_attach_controller" 00:30:15.366 } 00:30:15.366 EOF 00:30:15.366 )") 00:30:15.366 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:30:15.366 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
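rpc_cmd in the zcopy setup above is a thin wrapper around scripts/rpc.py. Replayed as direct invocations (all arguments exactly as logged; the process substitution feeding bdevperf its JSON is inferred from the /dev/fd/62 path in the trace), the sequence is:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Then the initiator-side run, fed the attach config on an anonymous fd:
build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192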
00:30:15.366 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:30:15.366 20:29:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:15.366 "params": { 00:30:15.366 "name": "Nvme1", 00:30:15.366 "trtype": "tcp", 00:30:15.366 "traddr": "10.0.0.3", 00:30:15.366 "adrfam": "ipv4", 00:30:15.366 "trsvcid": "4420", 00:30:15.366 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:15.366 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:15.366 "hdgst": false, 00:30:15.366 "ddgst": false 00:30:15.366 }, 00:30:15.366 "method": "bdev_nvme_attach_controller" 00:30:15.366 }' 00:30:15.366 [2024-10-01 20:29:10.437655] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:30:15.366 [2024-10-01 20:29:10.437743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68893 ] 00:30:15.366 [2024-10-01 20:29:10.571561] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.625 [2024-10-01 20:29:10.633351] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.625 Running I/O for 10 seconds... 00:30:25.841 5568.00 IOPS, 43.50 MiB/s 5663.00 IOPS, 44.24 MiB/s 5722.67 IOPS, 44.71 MiB/s 5747.25 IOPS, 44.90 MiB/s 5761.80 IOPS, 45.01 MiB/s 5771.50 IOPS, 45.09 MiB/s 5783.29 IOPS, 45.18 MiB/s 5789.75 IOPS, 45.23 MiB/s 5793.67 IOPS, 45.26 MiB/s 5788.90 IOPS, 45.23 MiB/s 00:30:25.841 Latency(us) 00:30:25.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.841 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:30:25.841 Verification LBA range: start 0x0 length 0x1000 00:30:25.841 Nvme1n1 : 10.02 5792.59 45.25 0.00 0.00 22022.43 3321.48 32648.84 00:30:25.841 =================================================================================================================== 00:30:25.841 Total : 5792.59 45.25 0.00 0.00 22022.43 3321.48 32648.84 00:30:25.841 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=69015 00:30:25.841 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:30:25.841 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:25.841 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:30:25.841 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:30:25.841 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:30:25.841 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:30:25.841 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:25.841 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:25.841 { 00:30:25.841 "params": { 00:30:25.841 "name": "Nvme$subsystem", 00:30:25.841 "trtype": "$TEST_TRANSPORT", 00:30:25.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:25.841 "adrfam": "ipv4", 00:30:25.841 "trsvcid": "$NVMF_PORT", 00:30:25.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.841 "hdgst": ${hdgst:-false}, 
00:30:25.841 "ddgst": ${ddgst:-false} 00:30:25.841 }, 00:30:25.841 "method": "bdev_nvme_attach_controller" 00:30:25.841 } 00:30:25.841 EOF 00:30:25.841 )") 00:30:25.842 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:30:25.842 [2024-10-01 20:29:21.054906] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.842 [2024-10-01 20:29:21.054952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.842 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:30:25.842 2024/10/01 20:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:25.842 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:30:25.842 20:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:25.842 "params": { 00:30:25.842 "name": "Nvme1", 00:30:25.842 "trtype": "tcp", 00:30:25.842 "traddr": "10.0.0.3", 00:30:25.842 "adrfam": "ipv4", 00:30:25.842 "trsvcid": "4420", 00:30:25.842 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:25.842 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:25.842 "hdgst": false, 00:30:25.842 "ddgst": false 00:30:25.842 }, 00:30:25.842 "method": "bdev_nvme_attach_controller" 00:30:25.842 }' 00:30:25.842 [2024-10-01 20:29:21.066881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.842 [2024-10-01 20:29:21.066912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.842 2024/10/01 20:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:25.842 [2024-10-01 20:29:21.078884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.842 [2024-10-01 20:29:21.078913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:25.842 2024/10/01 20:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:25.842 [2024-10-01 20:29:21.090882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:25.842 [2024-10-01 20:29:21.090910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:26.101 2024/10/01 20:29:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:26.101 [2024-10-01 20:29:21.097585] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:30:26.101 [2024-10-01 20:29:21.097655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69015 ]
00:30:26.102 [2024-10-01 20:29:21.228176] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:26.102 [2024-10-01 20:29:21.287603] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:30:26.362 Running I/O for 5 seconds...
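The repeated failures surrounding these notices are deliberate: while bdevperf drives I/O, the zcopy test (test/nvmf/target/zcopy.sh) keeps calling nvmf_subsystem_add_ns with an NSID that is already occupied, checking that the target rejects every attempt with JSON-RPC error -32602 (Invalid parameters) without disturbing in-flight I/O. A hedged reconstruction of that loop, assuming the stock scripts/rpc.py client and the names printed in the errors; the real script's argument order and exit handling may differ:

# Stress sketch: $perfpid is the bdevperf PID captured above (69015).
# Every call targets NSID 1, which is already in use, so a success here
# would mean the target accepted a duplicate namespace.
while kill -0 "$perfpid" 2> /dev/null; do
	if scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0; then
		echo "FATAL: duplicate NSID 1 was accepted" >&2
		exit 1
	fi
done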
00:30:27.401 11221.00 IOPS, 87.66 MiB/s
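Two quick consistency checks on the numbers this run reports. With 8 KiB I/Os a MiB holds exactly 128 of them, so the MiB/s column must equal IOPS/128; and by Little's law the mean latency at queue depth 128 should satisfy IOPS x latency of roughly 128 outstanding I/Os. Both hold for the figures above:

# MiB/s = IOPS / 128 for 8192-byte I/Os
awk 'BEGIN { printf "%.2f\n", 11221.00 / 128 }'   # 5 s randrw tick -> 87.66 MiB/s
awk 'BEGIN { printf "%.2f\n", 5792.59 / 128 }'    # 10 s verify run -> 45.25 MiB/s
# Little's law: in-flight I/Os = IOPS * mean latency (22022.43 us)
awk 'BEGIN { printf "%.1f\n", 5792.59 * 22022.43e-6 }'   # -> 127.6 of the 128 queue slots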
Code=-32602 Msg=Invalid parameters 00:30:27.401 [2024-10-01 20:29:22.526608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.401 [2024-10-01 20:29:22.526642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.401 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.401 [2024-10-01 20:29:22.541946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.401 [2024-10-01 20:29:22.541979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.402 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.402 [2024-10-01 20:29:22.557479] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.402 [2024-10-01 20:29:22.557512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.402 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.402 [2024-10-01 20:29:22.573655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.402 [2024-10-01 20:29:22.573689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.402 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.402 [2024-10-01 20:29:22.589611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.402 [2024-10-01 20:29:22.589645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.402 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.402 [2024-10-01 20:29:22.606273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.402 [2024-10-01 20:29:22.606307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.402 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.402 [2024-10-01 20:29:22.622132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.402 [2024-10-01 20:29:22.622166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.402 2024/10/01 
20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.402 [2024-10-01 20:29:22.632589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.402 [2024-10-01 20:29:22.632623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.402 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.402 [2024-10-01 20:29:22.648005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.402 [2024-10-01 20:29:22.648039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.402 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.661 [2024-10-01 20:29:22.664177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.661 [2024-10-01 20:29:22.664209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.661 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.661 [2024-10-01 20:29:22.674923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.661 [2024-10-01 20:29:22.674956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.661 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.661 [2024-10-01 20:29:22.690961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.661 [2024-10-01 20:29:22.690995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.661 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.661 [2024-10-01 20:29:22.707555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.661 [2024-10-01 20:29:22.707587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.661 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.661 [2024-10-01 
20:29:22.719351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.661 [2024-10-01 20:29:22.719381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.661 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.661 [2024-10-01 20:29:22.735586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.661 [2024-10-01 20:29:22.735649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.661 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.661 [2024-10-01 20:29:22.752897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.661 [2024-10-01 20:29:22.752973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.661 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.661 [2024-10-01 20:29:22.770317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.661 [2024-10-01 20:29:22.770390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.661 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.661 [2024-10-01 20:29:22.787326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.661 [2024-10-01 20:29:22.787396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.661 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.661 [2024-10-01 20:29:22.804120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.661 [2024-10-01 20:29:22.804189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.661 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.661 [2024-10-01 20:29:22.821557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.661 [2024-10-01 20:29:22.821615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.661 2024/10/01 20:29:22 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.661 [2024-10-01 20:29:22.838538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.662 [2024-10-01 20:29:22.838595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.662 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.662 [2024-10-01 20:29:22.855055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.662 [2024-10-01 20:29:22.855096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.662 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.662 [2024-10-01 20:29:22.872475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.662 [2024-10-01 20:29:22.872531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.662 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.662 [2024-10-01 20:29:22.889574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.662 [2024-10-01 20:29:22.889614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.662 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.662 [2024-10-01 20:29:22.906308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.662 [2024-10-01 20:29:22.906372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.662 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.922 [2024-10-01 20:29:22.922834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.922 [2024-10-01 20:29:22.922886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.922 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.922 [2024-10-01 20:29:22.940701] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.922 [2024-10-01 20:29:22.940749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.922 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.922 [2024-10-01 20:29:22.957022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.922 [2024-10-01 20:29:22.957071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.922 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.922 [2024-10-01 20:29:22.973229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.922 [2024-10-01 20:29:22.973262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.922 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.922 [2024-10-01 20:29:22.989353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.922 [2024-10-01 20:29:22.989394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.922 2024/10/01 20:29:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.922 [2024-10-01 20:29:23.000292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.922 [2024-10-01 20:29:23.000331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.922 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.922 [2024-10-01 20:29:23.014986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.922 [2024-10-01 20:29:23.015022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.922 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.922 [2024-10-01 20:29:23.032083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.922 [2024-10-01 20:29:23.032130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.922 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.922 [2024-10-01 20:29:23.048214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.922 [2024-10-01 20:29:23.048250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.922 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.922 [2024-10-01 20:29:23.063745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.922 [2024-10-01 20:29:23.063779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.922 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.922 [2024-10-01 20:29:23.079382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.922 [2024-10-01 20:29:23.079419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.922 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.922 [2024-10-01 20:29:23.090170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.922 [2024-10-01 20:29:23.090202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.922 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.922 [2024-10-01 20:29:23.105178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.922 [2024-10-01 20:29:23.105235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.922 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.923 [2024-10-01 20:29:23.122637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.923 [2024-10-01 20:29:23.122700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.923 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.923 [2024-10-01 20:29:23.139241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:30:27.923 [2024-10-01 20:29:23.139297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.923 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.923 [2024-10-01 20:29:23.155615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.923 [2024-10-01 20:29:23.155675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:27.923 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:27.923 [2024-10-01 20:29:23.172192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:27.923 [2024-10-01 20:29:23.172252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:28.182 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:28.182 [2024-10-01 20:29:23.188582] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:28.182 [2024-10-01 20:29:23.188644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:28.182 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:28.182 [2024-10-01 20:29:23.204969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:28.182 [2024-10-01 20:29:23.205030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:28.183 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:28.183 [2024-10-01 20:29:23.216093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:28.183 [2024-10-01 20:29:23.216160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:28.183 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:28.183 [2024-10-01 20:29:23.231094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:28.183 [2024-10-01 20:29:23.231148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:28.183 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:28.183 [2024-10-01 20:29:23.247710] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:28.183 [2024-10-01 20:29:23.247757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:28.183 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:28.183 [2024-10-01 20:29:23.258638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:28.183 [2024-10-01 20:29:23.258689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:28.183 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:28.183 [2024-10-01 20:29:23.273644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:28.183 [2024-10-01 20:29:23.273696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:28.183 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:28.183 [2024-10-01 20:29:23.291215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:28.183 [2024-10-01 20:29:23.291267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:28.183 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:28.183 [2024-10-01 20:29:23.307236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:28.183 [2024-10-01 20:29:23.307271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:28.183 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:28.183 [2024-10-01 20:29:23.319243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:28.183 [2024-10-01 20:29:23.319279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:28.183 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:28.183 [2024-10-01 20:29:23.336704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:30:28.183 [2024-10-01 20:29:23.336737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:28.183 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:28.183 [2024-10-01 20:29:23.352880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:28.183 [2024-10-01 20:29:23.352914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:28.183 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:28.183 [2024-10-01 20:29:23.368966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:28.183 [2024-10-01 20:29:23.369003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:28.183 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:28.183 [2024-10-01 20:29:23.385274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:28.183 [2024-10-01 20:29:23.385307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:28.183 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:28.183 [2024-10-01 20:29:23.395992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:28.183 [2024-10-01 20:29:23.396026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:28.183 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:28.183 [2024-10-01 20:29:23.411286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:28.183 [2024-10-01 20:29:23.411318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:28.183 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:28.183 [2024-10-01 20:29:23.427270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:28.183 [2024-10-01 20:29:23.427304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:28.183 2024/10/01 20:29:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
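For context on the failing call above: the test loops nvmf_subsystem_add_ns against NSID 1, which is already claimed on nqn.2016-06.io.spdk:cnode1, so SPDK's RPC server rejects every attempt with JSON-RPC error -32602 (the map[...] and %!s(bool=false) formatting is the Go JSON-RPC client printing its request params). A minimal sketch of one such request, assuming SPDK's default Unix-domain RPC socket at /var/tmp/spdk.sock (the socket path is an assumption; nqn, bdev_name, and nsid are taken from the records above):

import json
import socket

# Sketch: replay the rejected RPC against SPDK's JSON-RPC server.
# /var/tmp/spdk.sock is SPDK's default RPC socket -- adjust if the
# target was started with a different -r path.
req = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1, "no_auto_visible": False},
    },
}
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/spdk.sock")
    sock.sendall(json.dumps(req).encode())
    # A single recv suffices for this small error reply in a sketch.
    resp = json.loads(sock.recv(65536))
# With NSID 1 already in use, the reply carries error code -32602
# ("Invalid parameters"), matching the log records above.
print(resp.get("error"))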
[the same three-line error record repeats for each retry from 20:29:23.436 through 20:29:23.469]
00:30:28.442 11203.50 IOPS, 87.53 MiB/s
[the same three-line error record repeats for each retry from 20:29:23.487 through 20:29:24.426]
00:30:29.222 [2024-10-01 20:29:24.443565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*:
Requested NSID 1 already in use 00:30:29.222 [2024-10-01 20:29:24.443606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.222 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.222 [2024-10-01 20:29:24.461902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.222 [2024-10-01 20:29:24.462066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.222 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.501 11229.33 IOPS, 87.73 MiB/s [2024-10-01 20:29:24.477305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.501 [2024-10-01 20:29:24.477343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.501 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.501 [2024-10-01 20:29:24.494964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.501 [2024-10-01 20:29:24.495008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.501 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.501 [2024-10-01 20:29:24.511385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.501 [2024-10-01 20:29:24.511458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.501 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.501 [2024-10-01 20:29:24.527584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.501 [2024-10-01 20:29:24.527639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.501 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.501 [2024-10-01 20:29:24.543436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.501 [2024-10-01 20:29:24.543476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.501 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.501 [2024-10-01 20:29:24.560359] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.501 [2024-10-01 20:29:24.560412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.501 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.501 [2024-10-01 20:29:24.576793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.501 [2024-10-01 20:29:24.576849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.501 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.501 [2024-10-01 20:29:24.592932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.501 [2024-10-01 20:29:24.592987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.501 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.501 [2024-10-01 20:29:24.610109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.501 [2024-10-01 20:29:24.610159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.501 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.501 [2024-10-01 20:29:24.626163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.501 [2024-10-01 20:29:24.626227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.501 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.501 [2024-10-01 20:29:24.635630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.501 [2024-10-01 20:29:24.635682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.501 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.501 [2024-10-01 20:29:24.650192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:30:29.501 [2024-10-01 20:29:24.650228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.501 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.501 [2024-10-01 20:29:24.666834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.501 [2024-10-01 20:29:24.666889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.501 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.501 [2024-10-01 20:29:24.683069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.501 [2024-10-01 20:29:24.683121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.501 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.501 [2024-10-01 20:29:24.700014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.501 [2024-10-01 20:29:24.700051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.501 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.501 [2024-10-01 20:29:24.716222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.501 [2024-10-01 20:29:24.716259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.501 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.501 [2024-10-01 20:29:24.732490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.501 [2024-10-01 20:29:24.732541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.794 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.794 [2024-10-01 20:29:24.749803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.794 [2024-10-01 20:29:24.749875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.794 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.794 [2024-10-01 20:29:24.767271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.794 [2024-10-01 20:29:24.767343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.794 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.794 [2024-10-01 20:29:24.783323] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.794 [2024-10-01 20:29:24.783380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.794 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.794 [2024-10-01 20:29:24.801059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.794 [2024-10-01 20:29:24.801105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.794 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.794 [2024-10-01 20:29:24.817292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.794 [2024-10-01 20:29:24.817333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.794 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.794 [2024-10-01 20:29:24.833905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.794 [2024-10-01 20:29:24.833946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.794 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.794 [2024-10-01 20:29:24.850637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.794 [2024-10-01 20:29:24.850680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.794 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.794 [2024-10-01 20:29:24.867105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.794 [2024-10-01 20:29:24.867158] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.794 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.794 [2024-10-01 20:29:24.883295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.794 [2024-10-01 20:29:24.883333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.794 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.794 [2024-10-01 20:29:24.893869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.794 [2024-10-01 20:29:24.893907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.794 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.794 [2024-10-01 20:29:24.908812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.794 [2024-10-01 20:29:24.908852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.794 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.794 [2024-10-01 20:29:24.925168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.794 [2024-10-01 20:29:24.925235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.794 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.794 [2024-10-01 20:29:24.941636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.794 [2024-10-01 20:29:24.941690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.794 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.794 [2024-10-01 20:29:24.958620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.794 [2024-10-01 20:29:24.958655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.794 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.794 [2024-10-01 20:29:24.975544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.794 [2024-10-01 20:29:24.975579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.794 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.794 [2024-10-01 20:29:24.991692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.794 [2024-10-01 20:29:24.991728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.794 2024/10/01 20:29:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.794 [2024-10-01 20:29:25.002480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.794 [2024-10-01 20:29:25.002515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.794 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.794 [2024-10-01 20:29:25.018613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.794 [2024-10-01 20:29:25.018652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.794 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:29.794 [2024-10-01 20:29:25.035207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:29.794 [2024-10-01 20:29:25.035259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:29.794 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.053 [2024-10-01 20:29:25.050585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.053 [2024-10-01 20:29:25.050623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.053 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.053 [2024-10-01 20:29:25.066638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.053 [2024-10-01 20:29:25.066675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:30:30.053 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.053 [2024-10-01 20:29:25.082809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.053 [2024-10-01 20:29:25.082863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.053 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.053 [2024-10-01 20:29:25.099158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.053 [2024-10-01 20:29:25.099221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.053 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.053 [2024-10-01 20:29:25.116353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.053 [2024-10-01 20:29:25.116406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.053 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.053 [2024-10-01 20:29:25.133704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.053 [2024-10-01 20:29:25.133769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.053 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.053 [2024-10-01 20:29:25.150324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.053 [2024-10-01 20:29:25.150379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.053 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.053 [2024-10-01 20:29:25.166489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.053 [2024-10-01 20:29:25.166527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.053 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:30:30.053 [2024-10-01 20:29:25.182909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.053 [2024-10-01 20:29:25.182965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.053 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.053 [2024-10-01 20:29:25.200947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.053 [2024-10-01 20:29:25.200986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.053 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.053 [2024-10-01 20:29:25.217210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.053 [2024-10-01 20:29:25.217260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.053 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.053 [2024-10-01 20:29:25.232836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.053 [2024-10-01 20:29:25.232891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.053 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.054 [2024-10-01 20:29:25.248898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.054 [2024-10-01 20:29:25.248951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.054 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.054 [2024-10-01 20:29:25.266038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.054 [2024-10-01 20:29:25.266077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.054 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.054 [2024-10-01 20:29:25.282038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.054 [2024-10-01 20:29:25.282092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.054 2024/10/01 20:29:25 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.054 [2024-10-01 20:29:25.297900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.054 [2024-10-01 20:29:25.297939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.054 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.320 [2024-10-01 20:29:25.308605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.320 [2024-10-01 20:29:25.308642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.320 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.320 [2024-10-01 20:29:25.323899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.320 [2024-10-01 20:29:25.323936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.320 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.320 [2024-10-01 20:29:25.340688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.320 [2024-10-01 20:29:25.340726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.320 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.320 [2024-10-01 20:29:25.357528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.320 [2024-10-01 20:29:25.357566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.320 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.320 [2024-10-01 20:29:25.373712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.320 [2024-10-01 20:29:25.373750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.320 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.320 [2024-10-01 20:29:25.383517] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.321 [2024-10-01 20:29:25.383555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.321 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.321 [2024-10-01 20:29:25.398464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.321 [2024-10-01 20:29:25.398503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.321 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.321 [2024-10-01 20:29:25.414101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.321 [2024-10-01 20:29:25.414153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.321 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.321 [2024-10-01 20:29:25.429927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.321 [2024-10-01 20:29:25.429965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.321 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.321 [2024-10-01 20:29:25.440069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.321 [2024-10-01 20:29:25.440120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.321 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.321 [2024-10-01 20:29:25.454974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.321 [2024-10-01 20:29:25.455011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.321 2024/10/01 20:29:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:30.321 [2024-10-01 20:29:25.470246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:30.321 [2024-10-01 20:29:25.470284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:30.321 11203.50 IOPS, 87.53 MiB/s 2024/10/01 20:29:25 error on JSON-RPC call, method: 
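The records above are all the same negative-path check: NSID 1 is already attached to nqn.2016-06.io.spdk:cnode1, so each further nvmf_subsystem_add_ns call is rejected with JSON-RPC error Code=-32602 (Invalid parameters). For readers unfamiliar with the RPC surface, here is a minimal sketch of that exchange. It assumes a running SPDK target with its default RPC socket at /var/tmp/spdk.sock; the script and its rpc_call helper are illustrative only, not part of the test suite.

#!/usr/bin/env python3
# Sketch (assumptions: SPDK target running, default RPC socket path):
# issue nvmf_subsystem_add_ns for an NSID that is already attached and
# observe the -32602 "Invalid parameters" rejection seen in this log.
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"  # SPDK's default JSON-RPC Unix socket

def rpc_call(method, params, req_id=1):
    """Send one JSON-RPC 2.0 request over the Unix socket and return the reply."""
    req = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK_PATH)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            buf += chunk
            try:
                return json.loads(buf)  # done once a complete JSON reply arrived
            except json.JSONDecodeError:
                continue  # reply not complete yet, keep reading
    return None

# Same parameters the log shows in the params map above.
resp = rpc_call("nvmf_subsystem_add_ns", {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "namespace": {"bdev_name": "malloc0", "nsid": 1},  # NSID 1 already in use
})
if resp and "error" in resp:
    # Expect: -32602 Invalid parameters, matching the log records
    print("rejected:", resp["error"]["code"], resp["error"]["message"])

The same request would normally be issued with SPDK's bundled client, e.g. something like scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 (argument order per the rpc.py help in your tree).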
[... the add_ns rejections continue, same three-line record repeating from [2024-10-01 20:29:25.485161] through [2024-10-01 20:29:26.173497] ...]
00:30:31.102 [2024-10-01 20:29:26.189136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:31.102 [2024-10-01 20:29:26.189202]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.102 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.102 [2024-10-01 20:29:26.204587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.102 [2024-10-01 20:29:26.204623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.102 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.102 [2024-10-01 20:29:26.221667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.102 [2024-10-01 20:29:26.221706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.102 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.102 [2024-10-01 20:29:26.238496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.102 [2024-10-01 20:29:26.238535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.102 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.102 [2024-10-01 20:29:26.254591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.102 [2024-10-01 20:29:26.254631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.102 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.102 [2024-10-01 20:29:26.270972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.102 [2024-10-01 20:29:26.271011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.102 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.102 [2024-10-01 20:29:26.287192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.102 [2024-10-01 20:29:26.287229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.102 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.102 [2024-10-01 20:29:26.304389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.102 [2024-10-01 20:29:26.304428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.102 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.102 [2024-10-01 20:29:26.320641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.102 [2024-10-01 20:29:26.320681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.102 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.102 [2024-10-01 20:29:26.337487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.102 [2024-10-01 20:29:26.337525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.102 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.102 [2024-10-01 20:29:26.354045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.102 [2024-10-01 20:29:26.354084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.361 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.361 [2024-10-01 20:29:26.370525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.361 [2024-10-01 20:29:26.370566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.361 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.361 [2024-10-01 20:29:26.386064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.361 [2024-10-01 20:29:26.386104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.362 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.362 [2024-10-01 20:29:26.401970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.362 [2024-10-01 20:29:26.402009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace
00:30:31.362 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:30:31.362 [2024-10-01 20:29:26.418293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:31.362 [2024-10-01 20:29:26.418331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:31.362 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:30:31.362 [2024-10-01 20:29:26.434424] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:31.362 [2024-10-01 20:29:26.434463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:31.362 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:30:31.362 [2024-10-01 20:29:26.445226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:31.362 [2024-10-01 20:29:26.445274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:31.362 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:30:31.362 [2024-10-01 20:29:26.460283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:31.362 [2024-10-01 20:29:26.460321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:31.362 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:30:31.362 11198.60 IOPS, 87.49 MiB/s [2024-10-01 20:29:26.475740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:31.362 [2024-10-01 20:29:26.475794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:31.362 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:30:31.362
00:30:31.362 Latency(us)
00:30:31.362 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:30:31.362 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:30:31.362 Nvme1n1                     :       5.01   11199.49      87.50       0.00     0.00   11413.70    4468.36   18707.55
00:30:31.362 ===================================================================================================================
00:30:31.362 Total                       :            11199.49      87.50       0.00     0.00   11413.70    4468.36   18707.55
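The block of repeated failures above, which resumes below, is the zcopy test deliberately re-issuing nvmf_subsystem_add_ns for an NSID that is still attached while I/O is in flight; the interleaved "11198.60 IOPS, 87.49 MiB/s" line and the Latency table are the concurrent I/O job reporting from the middle of that loop. A minimal sketch of the raw request behind each failing entry, with method and params copied from the go-client lines above, and assuming SPDK's default RPC socket /var/tmp/spdk.sock (the same path waitforlisten polls later in this log) plus a netcat built with UNIX-socket support:

    # Re-issue the add-namespace request while NSID 1 is attached; the expected
    # reply is the error recorded above: Code=-32602 Msg=Invalid parameters
    # ("Requested NSID 1 already in use" on the target side).
    cat <<'EOF' | nc -U /var/tmp/spdk.sock
    {"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
     "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                "namespace": {"bdev_name": "malloc0", "nsid": 1}}}
    EOF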
00:30:31.362 [2024-10-01 20:29:26.485555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.362 [2024-10-01 20:29:26.485591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.362 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.362 [2024-10-01 20:29:26.497565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.362 [2024-10-01 20:29:26.497600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.362 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.362 [2024-10-01 20:29:26.509611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.362 [2024-10-01 20:29:26.509661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.362 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.362 [2024-10-01 20:29:26.521620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.362 [2024-10-01 20:29:26.521668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.362 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.362 [2024-10-01 20:29:26.533610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.362 [2024-10-01 20:29:26.533657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.362 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.362 [2024-10-01 20:29:26.545622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.362 [2024-10-01 20:29:26.545669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.362 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.362 [2024-10-01 20:29:26.557655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.362 [2024-10-01 20:29:26.557713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.362 2024/10/01 20:29:26 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.362 [2024-10-01 20:29:26.569628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.362 [2024-10-01 20:29:26.569672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.362 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.362 [2024-10-01 20:29:26.581637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.362 [2024-10-01 20:29:26.581686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.362 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.362 [2024-10-01 20:29:26.593638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.362 [2024-10-01 20:29:26.593684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.362 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.362 [2024-10-01 20:29:26.609624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.362 [2024-10-01 20:29:26.609670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.362 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.621 [2024-10-01 20:29:26.621618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.621 [2024-10-01 20:29:26.621654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.621 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.621 [2024-10-01 20:29:26.633640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:31.621 [2024-10-01 20:29:26.633687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:31.621 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:31.621 [2024-10-01 20:29:26.645637] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:31.621 [2024-10-01 20:29:26.645678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:31.621 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:30:31.621 [2024-10-01 20:29:26.657624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:31.621 [2024-10-01 20:29:26.657657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:31.621 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:30:31.621 [2024-10-01 20:29:26.669660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:31.621 [2024-10-01 20:29:26.669699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:31.621 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:30:31.621 [2024-10-01 20:29:26.681654] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:31.621 [2024-10-01 20:29:26.681684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:31.621 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:30:31.621 [2024-10-01 20:29:26.693634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:31.621 [2024-10-01 20:29:26.693679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:31.621 2024/10/01 20:29:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:30:31.621 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (69015) - No such process
00:30:31.621 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 69015
00:30:31.621 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:31.621 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:31.621 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:31.621 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:31.621 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:30:31.621 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:31.621 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:31.621 delay0 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:31.621 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:30:31.621 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:31.621 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:31.621 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:31.621 20:29:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
00:30:31.880 [2024-10-01 20:29:26.897370] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:30:39.997 Initializing NVMe Controllers
00:30:39.997 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:30:39.997 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:39.997 Initialization complete. Launching workers.
00:30:39.997 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 265, failed: 17095
00:30:39.997 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 17270, failed to submit 90
00:30:39.997 success 17095, unsuccessful 175, failed 0
00:30:39.997 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:30:39.997 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:30:39.997 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:30:39.997 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:30:39.997 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:39.997 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:30:39.997 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:39.997 20:29:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:39.997 rmmod nvme_tcp
00:30:39.997 rmmod nvme_fabrics
00:30:39.997 rmmod nvme_keyring
00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 68860 ']'
00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 68860
00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 68860 ']'
00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 68860
00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
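Here zcopy.sh swaps the namespace under load: zcopy.sh:52 removes NSID 1, zcopy.sh:53 wraps malloc0 in a delay bdev, zcopy.sh:54 re-attaches it as NSID 1, and zcopy.sh:56 drives it with the abort example (summary above: 17270 aborts submitted, 17095 successful). rpc_cmd is the autotest wrapper that forwards its arguments to SPDK's scripts/rpc.py, so the two RPC steps correspond roughly to the direct calls sketched below; the bdev and subsystem names are copied from the trace, and the bdev_delay_create flags are, per SPDK's rpc.py, the average and p99 read/write latencies in microseconds:

    # wrap malloc0 in a delay bdev with 1s average and p99 latency for both reads and writes
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # attach the delay bdev as NSID 1 of cnode1, replacing the namespace removed at zcopy.sh:52
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
        nqn.2016-06.io.spdk:cnode1 delay0 -n 1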
00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68860 00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:39.997 killing process with pid 68860 00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68860' 00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 68860 00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 68860 00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:39.997 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:39.998 20:29:34 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:30:39.998 00:30:39.998 real 0m25.232s 00:30:39.998 user 0m40.502s 00:30:39.998 sys 0m6.928s 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:39.998 ************************************ 00:30:39.998 END TEST nvmf_zcopy 00:30:39.998 ************************************ 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:30:39.998 ************************************ 00:30:39.998 START TEST nvmf_nmic 00:30:39.998 ************************************ 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:30:39.998 * Looking for test storage... 00:30:39.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 
00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:39.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.998 --rc genhtml_branch_coverage=1 00:30:39.998 --rc genhtml_function_coverage=1 00:30:39.998 --rc genhtml_legend=1 00:30:39.998 --rc geninfo_all_blocks=1 00:30:39.998 --rc geninfo_unexecuted_blocks=1 00:30:39.998 00:30:39.998 ' 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:39.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.998 --rc genhtml_branch_coverage=1 00:30:39.998 --rc genhtml_function_coverage=1 00:30:39.998 --rc genhtml_legend=1 00:30:39.998 --rc geninfo_all_blocks=1 00:30:39.998 --rc geninfo_unexecuted_blocks=1 00:30:39.998 00:30:39.998 ' 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:39.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.998 --rc genhtml_branch_coverage=1 00:30:39.998 --rc genhtml_function_coverage=1 00:30:39.998 --rc genhtml_legend=1 00:30:39.998 --rc geninfo_all_blocks=1 00:30:39.998 --rc geninfo_unexecuted_blocks=1 00:30:39.998 00:30:39.998 ' 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:39.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.998 --rc genhtml_branch_coverage=1 00:30:39.998 --rc genhtml_function_coverage=1 00:30:39.998 --rc genhtml_legend=1 00:30:39.998 --rc geninfo_all_blocks=1 00:30:39.998 --rc geninfo_unexecuted_blocks=1 00:30:39.998 00:30:39.998 ' 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:39.998 
20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:39.998 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:30:39.998 20:29:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # nvmf_veth_init 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:39.998 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:39.999 Cannot 
find device "nvmf_init_br" 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:39.999 Cannot find device "nvmf_init_br2" 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:39.999 Cannot find device "nvmf_tgt_br" 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:39.999 Cannot find device "nvmf_tgt_br2" 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:39.999 Cannot find device "nvmf_init_br" 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:39.999 Cannot find device "nvmf_init_br2" 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:39.999 Cannot find device "nvmf_tgt_br" 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:39.999 Cannot find device "nvmf_tgt_br2" 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:39.999 Cannot find device "nvmf_br" 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:39.999 Cannot find device "nvmf_init_if" 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:39.999 Cannot find device "nvmf_init_if2" 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:39.999 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:39.999 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:39.999 20:29:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:39.999 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:39.999 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:30:39.999 00:30:39.999 --- 10.0.0.3 ping statistics --- 00:30:39.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.999 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:39.999 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:39.999 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:30:39.999 00:30:39.999 --- 10.0.0.4 ping statistics --- 00:30:39.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.999 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:39.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:39.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:30:39.999 00:30:39.999 --- 10.0.0.1 ping statistics --- 00:30:39.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.999 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:39.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:39.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:30:39.999 00:30:39.999 --- 10.0.0.2 ping statistics --- 00:30:39.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.999 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # return 0 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=69399 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 69399 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 69399 ']' 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:39.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:39.999 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:39.999 [2024-10-01 20:29:35.228740] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
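The nvmf_veth_init sequence traced above is easier to follow condensed: two initiator-side veth pairs and two target-side pairs are created, the target ends are moved into the nvmf_tgt_ns_spdk namespace, the four *_br peers are enslaved to one bridge, TCP port 4420 is opened with tagged iptables rules, and connectivity is verified with ping in both directions. A minimal sketch assembled from the commands in the trace (same interface names and 10.0.0.0/24 addressing; the second initiator rule for nvmf_init_if2 is analogous):

    ip netns add nvmf_tgt_ns_spdk
    # One veth pair per interface: the *_if end carries the address, the *_br end joins the bridge.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # Target-side ends live in the namespace nvmf_tgt will run in.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # One bridge stitches the four *_br peers into a single L2 segment.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # Open the NVMe/TCP port; the SPDK_NVMF comment lets teardown find these rules later.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
    ping -c 1 10.0.0.3                                  # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host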
00:30:39.999 [2024-10-01 20:29:35.228852] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:40.287 [2024-10-01 20:29:35.370837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:40.287 [2024-10-01 20:29:35.440860] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:40.287 [2024-10-01 20:29:35.440927] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:40.287 [2024-10-01 20:29:35.440941] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:40.287 [2024-10-01 20:29:35.440951] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:40.287 [2024-10-01 20:29:35.440961] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:40.287 [2024-10-01 20:29:35.441095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.287 [2024-10-01 20:29:35.441217] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:40.287 [2024-10-01 20:29:35.441872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:40.287 [2024-10-01 20:29:35.441885] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.545 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:40.545 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:30:40.545 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:40.545 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:40.545 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:40.545 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:40.545 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:40.545 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.545 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:40.545 [2024-10-01 20:29:35.622784] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.545 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.545 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:40.545 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:40.546 Malloc0 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.546 20:29:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:40.546 [2024-10-01 20:29:35.670407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.546 test case1: single bdev can't be used in multiple subsystems 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:40.546 [2024-10-01 20:29:35.694219] bdev.c:8241:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:30:40.546 [2024-10-01 20:29:35.694265] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:30:40.546 [2024-10-01 20:29:35.694279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:40.546 2024/10/01 20:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:30:40.546 request: 00:30:40.546 { 00:30:40.546 "method": "nvmf_subsystem_add_ns", 00:30:40.546 "params": { 00:30:40.546 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:30:40.546 "namespace": { 00:30:40.546 "bdev_name": "Malloc0", 00:30:40.546 "no_auto_visible": false 00:30:40.546 } 00:30:40.546 } 00:30:40.546 } 00:30:40.546 Got JSON-RPC error response 00:30:40.546 GoRPCClient: error on JSON-RPC call 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:30:40.546 Adding namespace failed - expected result. 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:30:40.546 test case2: host connect to nvmf target in multiple paths 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:40.546 [2024-10-01 20:29:35.706324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.546 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:30:40.805 20:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:30:40.805 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:30:40.805 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:30:40.805 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:30:40.805 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:30:40.805 20:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:30:43.357 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:30:43.357 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:30:43.357 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:30:43.357 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:30:43.357 20:29:38 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:30:43.357 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:30:43.357 20:29:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:43.357 [global] 00:30:43.357 thread=1 00:30:43.357 invalidate=1 00:30:43.357 rw=write 00:30:43.357 time_based=1 00:30:43.357 runtime=1 00:30:43.357 ioengine=libaio 00:30:43.357 direct=1 00:30:43.357 bs=4096 00:30:43.357 iodepth=1 00:30:43.357 norandommap=0 00:30:43.357 numjobs=1 00:30:43.357 00:30:43.357 verify_dump=1 00:30:43.357 verify_backlog=512 00:30:43.357 verify_state_save=0 00:30:43.357 do_verify=1 00:30:43.357 verify=crc32c-intel 00:30:43.357 [job0] 00:30:43.357 filename=/dev/nvme0n1 00:30:43.357 Could not set queue depth (nvme0n1) 00:30:43.357 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:43.357 fio-3.35 00:30:43.357 Starting 1 thread 00:30:44.290 00:30:44.290 job0: (groupid=0, jobs=1): err= 0: pid=69495: Tue Oct 1 20:29:39 2024 00:30:44.290 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:30:44.290 slat (nsec): min=14253, max=47598, avg=17688.72, stdev=3220.93 00:30:44.290 clat (usec): min=131, max=372, avg=148.40, stdev=11.89 00:30:44.290 lat (usec): min=147, max=400, avg=166.09, stdev=12.52 00:30:44.290 clat percentiles (usec): 00:30:44.290 | 1.00th=[ 137], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 143], 00:30:44.290 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:30:44.290 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 159], 95.00th=[ 163], 00:30:44.290 | 99.00th=[ 178], 99.50th=[ 192], 99.90th=[ 338], 99.95th=[ 359], 00:30:44.290 | 99.99th=[ 371] 00:30:44.290 write: IOPS=3536, BW=13.8MiB/s (14.5MB/s)(13.8MiB/1001msec); 0 zone resets 00:30:44.290 slat (nsec): min=19648, max=91676, avg=24939.88, stdev=5115.43 00:30:44.290 clat (usec): min=83, max=541, avg=109.73, stdev=16.23 00:30:44.290 lat (usec): min=116, max=571, avg=134.67, stdev=17.86 00:30:44.290 clat percentiles (usec): 00:30:44.290 | 1.00th=[ 98], 5.00th=[ 100], 10.00th=[ 101], 20.00th=[ 103], 00:30:44.290 | 30.00th=[ 104], 40.00th=[ 105], 50.00th=[ 108], 60.00th=[ 110], 00:30:44.290 | 70.00th=[ 112], 80.00th=[ 115], 90.00th=[ 121], 95.00th=[ 126], 00:30:44.290 | 99.00th=[ 155], 99.50th=[ 178], 99.90th=[ 347], 99.95th=[ 449], 00:30:44.290 | 99.99th=[ 545] 00:30:44.290 bw ( KiB/s): min=13493, max=13493, per=95.38%, avg=13493.00, stdev= 0.00, samples=1 00:30:44.290 iops : min= 3373, max= 3373, avg=3373.00, stdev= 0.00, samples=1 00:30:44.290 lat (usec) : 100=3.42%, 250=96.39%, 500=0.18%, 750=0.02% 00:30:44.290 cpu : usr=3.40%, sys=10.30%, ctx=6612, majf=0, minf=5 00:30:44.290 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:44.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.290 issued rwts: total=3072,3540,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:44.290 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:44.290 00:30:44.290 Run status group 0 (all jobs): 00:30:44.290 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:30:44.290 WRITE: bw=13.8MiB/s (14.5MB/s), 13.8MiB/s-13.8MiB/s (14.5MB/s-14.5MB/s), io=13.8MiB 
(14.5MB), run=1001-1001msec 00:30:44.290 00:30:44.290 Disk stats (read/write): 00:30:44.290 nvme0n1: ios=2864/3072, merge=0/0, ticks=447/350, in_queue=797, util=91.16% 00:30:44.290 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:44.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:30:44.290 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:44.290 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:30:44.290 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:30:44.290 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:44.290 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:30:44.290 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:44.290 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:30:44.290 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:44.290 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:30:44.290 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:44.290 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:30:44.290 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:44.290 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:30:44.290 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:44.290 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:44.290 rmmod nvme_tcp 00:30:44.290 rmmod nvme_fabrics 00:30:44.290 rmmod nvme_keyring 00:30:44.290 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:44.549 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:30:44.549 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:30:44.549 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 69399 ']' 00:30:44.549 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 69399 00:30:44.549 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 69399 ']' 00:30:44.549 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 69399 00:30:44.549 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:30:44.549 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:44.549 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69399 00:30:44.549 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:44.549 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:44.549 killing process with pid 69399 00:30:44.549 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 
-- # echo 'killing process with pid 69399' 00:30:44.549 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 69399 00:30:44.549 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 69399 00:30:44.808 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:44.808 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:44.808 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:44.808 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:30:44.808 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:30:44.808 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:44.808 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:30:44.808 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:44.808 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:44.808 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:44.808 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:44.808 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:44.808 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:44.808 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:44.808 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:44.808 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:44.808 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:44.808 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:44.808 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:44.808 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:44.808 20:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:44.809 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:44.809 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:44.809 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.809 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:44.809 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.067 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:30:45.067 00:30:45.067 real 0m5.501s 00:30:45.067 user 0m17.009s 00:30:45.067 sys 0m1.405s 00:30:45.067 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:45.067 
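The nvmftestfini path above unwinds exactly that setup. The iptr step filters the rule set through grep -v SPDK_NVMF, which is why setup tagged every rule with that comment; the links and namespace then go away in reverse order. Condensed from the trace (the _remove_spdk_ns helper body is not shown there, so the last line is an assumption about its effect):

    # Keep every iptables rule except the ones the test tagged.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
    done
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of _remove_spdk_ns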
20:29:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:45.068 ************************************ 00:30:45.068 END TEST nvmf_nmic 00:30:45.068 ************************************ 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:30:45.068 ************************************ 00:30:45.068 START TEST nvmf_fio_target 00:30:45.068 ************************************ 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:30:45.068 * Looking for test storage... 00:30:45.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:45.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.068 --rc genhtml_branch_coverage=1 00:30:45.068 --rc genhtml_function_coverage=1 00:30:45.068 --rc genhtml_legend=1 00:30:45.068 --rc geninfo_all_blocks=1 00:30:45.068 --rc geninfo_unexecuted_blocks=1 00:30:45.068 00:30:45.068 ' 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:45.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.068 --rc genhtml_branch_coverage=1 00:30:45.068 --rc genhtml_function_coverage=1 00:30:45.068 --rc genhtml_legend=1 00:30:45.068 --rc geninfo_all_blocks=1 00:30:45.068 --rc geninfo_unexecuted_blocks=1 00:30:45.068 00:30:45.068 ' 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:45.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.068 --rc genhtml_branch_coverage=1 00:30:45.068 --rc genhtml_function_coverage=1 00:30:45.068 --rc genhtml_legend=1 00:30:45.068 --rc geninfo_all_blocks=1 00:30:45.068 --rc geninfo_unexecuted_blocks=1 00:30:45.068 00:30:45.068 ' 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:45.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.068 --rc genhtml_branch_coverage=1 00:30:45.068 --rc genhtml_function_coverage=1 00:30:45.068 --rc genhtml_legend=1 00:30:45.068 --rc geninfo_all_blocks=1 00:30:45.068 --rc geninfo_unexecuted_blocks=1 00:30:45.068 00:30:45.068 ' 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:30:45.068 
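Before the test body starts, fio.sh checks the installed lcov against version 2 using the lt/cmp_versions helpers whose field-by-field walk fills the trace above. A simplified reconstruction, not the exact scripts/common.sh implementation (which also dispatches on the operator):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-:          # split version strings on dots, dashes, and colons
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
        done
        return 1               # equal versions are not '<'
    }
    lt 1.15 2 && echo 'lcov predates 2.x'   # the first fields, 1 < 2, decide, as in the trace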
20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.068 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:45.069 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:45.069 20:29:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # nvmf_veth_init 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:45.069 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:45.327 Cannot find device "nvmf_init_br" 00:30:45.327 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:30:45.327 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:45.327 Cannot find device "nvmf_init_br2" 00:30:45.327 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:30:45.327 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:45.327 Cannot find device "nvmf_tgt_br" 00:30:45.327 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:30:45.327 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:45.327 Cannot find device "nvmf_tgt_br2" 00:30:45.327 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:30:45.327 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:45.327 Cannot find device "nvmf_init_br" 00:30:45.327 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:30:45.327 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:45.327 Cannot find device "nvmf_init_br2" 00:30:45.327 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:30:45.327 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:45.327 Cannot find device "nvmf_tgt_br" 00:30:45.327 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:30:45.327 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:45.327 Cannot find device "nvmf_tgt_br2" 00:30:45.327 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:45.328 Cannot find device "nvmf_br" 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:45.328 Cannot find device "nvmf_init_if" 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:45.328 Cannot find device "nvmf_init_if2" 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:45.328 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:30:45.328 
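These "Cannot find device" / "Cannot open network namespace" messages are expected noise: nvmftestinit begins by tearing down whatever a previous run may have left behind, and the trace executes true after every miss so the failures do not abort the script. The pattern, sketched:

    # Best-effort pre-cleanup: tolerate devices that are already gone.
    ip link set nvmf_init_br nomaster  || true   # prints "Cannot find device" on a clean host
    ip link delete nvmf_br type bridge || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    ip netns add nvmf_tgt_ns_spdk                # now provision from a known-clean state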
20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:45.328 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:45.328 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:45.587 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:45.588 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:45.588 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:30:45.588 00:30:45.588 --- 10.0.0.3 ping statistics --- 00:30:45.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.588 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:45.588 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:45.588 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:30:45.588 00:30:45.588 --- 10.0.0.4 ping statistics --- 00:30:45.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.588 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:45.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:45.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:30:45.588 00:30:45.588 --- 10.0.0.1 ping statistics --- 00:30:45.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.588 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:45.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:45.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:30:45.588 00:30:45.588 --- 10.0.0.2 ping statistics --- 00:30:45.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.588 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # return 0 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=69731 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 69731 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 69731 ']' 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:45.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:45.588 20:29:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:45.588 [2024-10-01 20:29:40.795643] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:30:45.588 [2024-10-01 20:29:40.795738] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.846 [2024-10-01 20:29:40.933574] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:45.847 [2024-10-01 20:29:40.993219] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.847 [2024-10-01 20:29:40.993283] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.847 [2024-10-01 20:29:40.993302] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.847 [2024-10-01 20:29:40.993314] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.847 [2024-10-01 20:29:40.993325] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:45.847 [2024-10-01 20:29:40.993511] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.847 [2024-10-01 20:29:40.993674] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:45.847 [2024-10-01 20:29:40.994397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:45.847 [2024-10-01 20:29:40.994401] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.104 20:29:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:46.104 20:29:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:30:46.104 20:29:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:46.104 20:29:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:46.104 20:29:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:46.104 20:29:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:46.104 20:29:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:46.363 [2024-10-01 20:29:41.439191] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:46.363 20:29:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:46.622 20:29:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:30:46.622 20:29:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:46.880 20:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:30:46.880 20:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:47.447 20:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:30:47.447 20:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:47.705 20:29:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:30:47.705 20:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:30:47.963 20:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:48.222 20:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:30:48.222 20:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:48.481 20:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:30:48.481 20:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:48.740 20:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:30:48.740 20:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:30:49.307 20:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:49.307 20:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:49.307 20:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:49.890 20:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:49.890 20:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:49.890 20:29:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:50.149 [2024-10-01 20:29:45.366766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:50.149 20:29:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:30:50.407 20:29:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:30:50.973 20:29:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:30:50.973 20:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:30:50.973 20:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:30:50.973 20:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:30:50.973 20:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:30:50.973 20:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:30:50.973 20:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:30:52.873 20:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:30:52.873 20:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:30:52.873 20:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:30:52.873 20:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:30:52.873 20:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:30:52.873 20:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:30:52.873 20:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:53.131 [global] 00:30:53.131 thread=1 00:30:53.131 invalidate=1 00:30:53.131 rw=write 00:30:53.131 time_based=1 00:30:53.131 runtime=1 00:30:53.131 ioengine=libaio 00:30:53.131 direct=1 00:30:53.131 bs=4096 00:30:53.131 iodepth=1 00:30:53.131 norandommap=0 00:30:53.131 numjobs=1 00:30:53.131 00:30:53.131 verify_dump=1 00:30:53.131 verify_backlog=512 00:30:53.131 verify_state_save=0 00:30:53.131 do_verify=1 00:30:53.131 verify=crc32c-intel 00:30:53.131 [job0] 00:30:53.131 filename=/dev/nvme0n1 00:30:53.131 [job1] 00:30:53.131 filename=/dev/nvme0n2 00:30:53.131 [job2] 00:30:53.131 filename=/dev/nvme0n3 00:30:53.131 [job3] 00:30:53.131 filename=/dev/nvme0n4 00:30:53.131 Could not set queue depth (nvme0n1) 00:30:53.131 Could not set queue depth (nvme0n2) 00:30:53.131 Could not set queue depth (nvme0n3) 00:30:53.131 Could not set queue depth (nvme0n4) 00:30:53.131 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:53.131 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:53.131 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:53.131 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:53.131 fio-3.35 00:30:53.131 Starting 4 threads 00:30:54.536 00:30:54.537 job0: (groupid=0, jobs=1): err= 0: pid=70022: Tue Oct 1 20:29:49 2024 00:30:54.537 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:30:54.537 slat (nsec): min=11783, max=87811, avg=17680.21, stdev=4831.24 00:30:54.537 clat (usec): min=92, max=2218, avg=238.38, stdev=70.86 00:30:54.537 lat (usec): min=159, max=2235, avg=256.06, stdev=70.61 00:30:54.537 clat percentiles (usec): 00:30:54.537 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 172], 00:30:54.537 | 30.00th=[ 182], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:30:54.537 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 293], 00:30:54.537 | 99.00th=[ 318], 99.50th=[ 347], 99.90th=[ 750], 99.95th=[ 848], 00:30:54.537 | 99.99th=[ 2212] 00:30:54.537 write: IOPS=2153, BW=8615KiB/s (8822kB/s)(8624KiB/1001msec); 0 zone resets 00:30:54.537 slat 
(nsec): min=19125, max=93966, avg=27527.65, stdev=4729.75 00:30:54.537 clat (usec): min=110, max=264, avg=188.99, stdev=36.64 00:30:54.537 lat (usec): min=133, max=337, avg=216.52, stdev=35.53 00:30:54.537 clat percentiles (usec): 00:30:54.537 | 1.00th=[ 118], 5.00th=[ 125], 10.00th=[ 129], 20.00th=[ 139], 00:30:54.537 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 208], 00:30:54.537 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 223], 95.00th=[ 227], 00:30:54.537 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 262], 99.95th=[ 265], 00:30:54.537 | 99.99th=[ 265] 00:30:54.537 bw ( KiB/s): min= 8192, max= 8192, per=24.70%, avg=8192.00, stdev= 0.00, samples=1 00:30:54.537 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:54.537 lat (usec) : 100=0.02%, 250=69.31%, 500=30.59%, 1000=0.05% 00:30:54.537 lat (msec) : 4=0.02% 00:30:54.537 cpu : usr=1.80%, sys=7.70%, ctx=4208, majf=0, minf=15 00:30:54.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:54.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.537 issued rwts: total=2048,2156,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:54.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:54.537 job1: (groupid=0, jobs=1): err= 0: pid=70023: Tue Oct 1 20:29:49 2024 00:30:54.537 read: IOPS=1724, BW=6897KiB/s (7063kB/s)(6904KiB/1001msec) 00:30:54.537 slat (nsec): min=13148, max=88943, avg=19966.73, stdev=6565.76 00:30:54.537 clat (usec): min=137, max=783, avg=270.96, stdev=30.90 00:30:54.537 lat (usec): min=153, max=810, avg=290.92, stdev=31.61 00:30:54.537 clat percentiles (usec): 00:30:54.537 | 1.00th=[ 169], 5.00th=[ 239], 10.00th=[ 249], 20.00th=[ 260], 00:30:54.537 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:30:54.537 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 302], 00:30:54.537 | 99.00th=[ 338], 99.50th=[ 367], 99.90th=[ 701], 99.95th=[ 783], 00:30:54.537 | 99.99th=[ 783] 00:30:54.537 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:30:54.537 slat (usec): min=18, max=141, avg=25.46, stdev= 7.00 00:30:54.537 clat (usec): min=106, max=408, avg=213.75, stdev=22.12 00:30:54.537 lat (usec): min=131, max=501, avg=239.21, stdev=22.80 00:30:54.537 clat percentiles (usec): 00:30:54.537 | 1.00th=[ 124], 5.00th=[ 190], 10.00th=[ 202], 20.00th=[ 206], 00:30:54.537 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 215], 60.00th=[ 217], 00:30:54.537 | 70.00th=[ 221], 80.00th=[ 225], 90.00th=[ 235], 95.00th=[ 243], 00:30:54.537 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 289], 99.95th=[ 297], 00:30:54.537 | 99.99th=[ 408] 00:30:54.537 bw ( KiB/s): min= 8192, max= 8192, per=24.70%, avg=8192.00, stdev= 0.00, samples=1 00:30:54.537 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:54.537 lat (usec) : 250=57.47%, 500=42.45%, 750=0.05%, 1000=0.03% 00:30:54.537 cpu : usr=2.10%, sys=6.30%, ctx=3782, majf=0, minf=7 00:30:54.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:54.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.537 issued rwts: total=1726,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:54.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:54.537 job2: (groupid=0, jobs=1): err= 0: pid=70024: Tue Oct 1 20:29:49 2024 
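Stepping back briefly: before this fio output began, the waitforserial trace (common/autotest_common.sh@1198-1208) gated the run by polling until all four cnode1 namespaces appeared as block devices. Condensed into one sketch, with the retry ceiling, lsblk probe, and 2 s sleep taken from the traced lines and the surrounding plumbing assumed:

    waitforserial() {
        # Succeed once lsblk shows $2 devices whose SERIAL matches $1;
        # retry up to 15 times, sleeping 2 s between attempts.
        local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

In this run it returned on the first probe: grep -c SPDKISFASTANDAWESOME counted 4, one block device per attached namespace.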
00:30:54.537 read: IOPS=2021, BW=8088KiB/s (8282kB/s)(8096KiB/1001msec) 00:30:54.537 slat (usec): min=11, max=140, avg=20.17, stdev= 7.37 00:30:54.537 clat (usec): min=154, max=3035, avg=244.91, stdev=91.53 00:30:54.537 lat (usec): min=174, max=3084, avg=265.08, stdev=90.63 00:30:54.537 clat percentiles (usec): 00:30:54.537 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 182], 00:30:54.537 | 30.00th=[ 194], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:30:54.537 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 297], 00:30:54.537 | 99.00th=[ 334], 99.50th=[ 379], 99.90th=[ 742], 99.95th=[ 2245], 00:30:54.537 | 99.99th=[ 3032] 00:30:54.537 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:30:54.537 slat (nsec): min=20585, max=73201, avg=27430.00, stdev=5405.21 00:30:54.537 clat (usec): min=121, max=264, avg=194.45, stdev=31.35 00:30:54.537 lat (usec): min=145, max=300, avg=221.88, stdev=30.29 00:30:54.537 clat percentiles (usec): 00:30:54.537 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 139], 20.00th=[ 153], 00:30:54.537 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 206], 60.00th=[ 210], 00:30:54.537 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 223], 95.00th=[ 227], 00:30:54.537 | 99.00th=[ 239], 99.50th=[ 243], 99.90th=[ 253], 99.95th=[ 255], 00:30:54.537 | 99.99th=[ 265] 00:30:54.537 bw ( KiB/s): min= 8208, max= 8208, per=24.75%, avg=8208.00, stdev= 0.00, samples=1 00:30:54.537 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:30:54.537 lat (usec) : 250=68.32%, 500=31.58%, 750=0.05% 00:30:54.537 lat (msec) : 4=0.05% 00:30:54.537 cpu : usr=1.40%, sys=8.20%, ctx=4074, majf=0, minf=7 00:30:54.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:54.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.537 issued rwts: total=2024,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:54.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:54.537 job3: (groupid=0, jobs=1): err= 0: pid=70025: Tue Oct 1 20:29:49 2024 00:30:54.537 read: IOPS=1611, BW=6446KiB/s (6600kB/s)(6452KiB/1001msec) 00:30:54.537 slat (usec): min=15, max=170, avg=20.82, stdev= 6.01 00:30:54.537 clat (usec): min=170, max=1123, avg=283.16, stdev=39.54 00:30:54.537 lat (usec): min=189, max=1141, avg=303.98, stdev=40.88 00:30:54.537 clat percentiles (usec): 00:30:54.537 | 1.00th=[ 249], 5.00th=[ 258], 10.00th=[ 262], 20.00th=[ 265], 00:30:54.537 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 277], 00:30:54.537 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 367], 00:30:54.537 | 99.00th=[ 420], 99.50th=[ 441], 99.90th=[ 594], 99.95th=[ 1123], 00:30:54.537 | 99.99th=[ 1123] 00:30:54.537 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:30:54.537 slat (nsec): min=20365, max=93780, avg=28676.35, stdev=6103.89 00:30:54.537 clat (usec): min=132, max=1774, avg=216.18, stdev=42.23 00:30:54.537 lat (usec): min=155, max=1814, avg=244.85, stdev=42.75 00:30:54.537 clat percentiles (usec): 00:30:54.537 | 1.00th=[ 184], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 204], 00:30:54.537 | 30.00th=[ 206], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 215], 00:30:54.537 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 235], 95.00th=[ 247], 00:30:54.537 | 99.00th=[ 269], 99.50th=[ 306], 99.90th=[ 578], 99.95th=[ 824], 00:30:54.537 | 99.99th=[ 1778] 00:30:54.537 bw ( KiB/s): min= 8192, max= 8192, 
per=24.70%, avg=8192.00, stdev= 0.00, samples=1 00:30:54.537 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:54.537 lat (usec) : 250=53.89%, 500=45.92%, 750=0.11%, 1000=0.03% 00:30:54.537 lat (msec) : 2=0.05% 00:30:54.537 cpu : usr=1.10%, sys=7.70%, ctx=3663, majf=0, minf=9 00:30:54.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:54.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.537 issued rwts: total=1613,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:54.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:54.537 00:30:54.537 Run status group 0 (all jobs): 00:30:54.537 READ: bw=28.9MiB/s (30.3MB/s), 6446KiB/s-8184KiB/s (6600kB/s-8380kB/s), io=28.9MiB (30.4MB), run=1001-1001msec 00:30:54.537 WRITE: bw=32.4MiB/s (34.0MB/s), 8184KiB/s-8615KiB/s (8380kB/s-8822kB/s), io=32.4MiB (34.0MB), run=1001-1001msec 00:30:54.537 00:30:54.537 Disk stats (read/write): 00:30:54.537 nvme0n1: ios=1586/1906, merge=0/0, ticks=438/394, in_queue=832, util=87.78% 00:30:54.537 nvme0n2: ios=1575/1706, merge=0/0, ticks=445/366, in_queue=811, util=88.45% 00:30:54.537 nvme0n3: ios=1536/1877, merge=0/0, ticks=404/386, in_queue=790, util=89.03% 00:30:54.537 nvme0n4: ios=1536/1601, merge=0/0, ticks=444/360, in_queue=804, util=89.78% 00:30:54.537 20:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:30:54.537 [global] 00:30:54.537 thread=1 00:30:54.537 invalidate=1 00:30:54.537 rw=randwrite 00:30:54.537 time_based=1 00:30:54.537 runtime=1 00:30:54.537 ioengine=libaio 00:30:54.537 direct=1 00:30:54.537 bs=4096 00:30:54.537 iodepth=1 00:30:54.537 norandommap=0 00:30:54.537 numjobs=1 00:30:54.537 00:30:54.537 verify_dump=1 00:30:54.537 verify_backlog=512 00:30:54.537 verify_state_save=0 00:30:54.537 do_verify=1 00:30:54.537 verify=crc32c-intel 00:30:54.537 [job0] 00:30:54.537 filename=/dev/nvme0n1 00:30:54.537 [job1] 00:30:54.537 filename=/dev/nvme0n2 00:30:54.537 [job2] 00:30:54.537 filename=/dev/nvme0n3 00:30:54.537 [job3] 00:30:54.537 filename=/dev/nvme0n4 00:30:54.537 Could not set queue depth (nvme0n1) 00:30:54.537 Could not set queue depth (nvme0n2) 00:30:54.537 Could not set queue depth (nvme0n3) 00:30:54.537 Could not set queue depth (nvme0n4) 00:30:54.537 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:54.538 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:54.538 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:54.538 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:54.538 fio-3.35 00:30:54.538 Starting 4 threads 00:30:55.908 00:30:55.908 job0: (groupid=0, jobs=1): err= 0: pid=70079: Tue Oct 1 20:29:50 2024 00:30:55.908 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:30:55.908 slat (nsec): min=13586, max=52624, avg=19474.85, stdev=5662.22 00:30:55.908 clat (usec): min=140, max=339, avg=172.34, stdev=19.08 00:30:55.908 lat (usec): min=156, max=368, avg=191.82, stdev=20.23 00:30:55.908 clat percentiles (usec): 00:30:55.908 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:30:55.908 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 
167], 60.00th=[ 172], 00:30:55.908 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 192], 95.00th=[ 210], 00:30:55.908 | 99.00th=[ 251], 99.50th=[ 265], 99.90th=[ 306], 99.95th=[ 334], 00:30:55.908 | 99.99th=[ 338] 00:30:55.908 write: IOPS=3003, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1001msec); 0 zone resets 00:30:55.908 slat (usec): min=19, max=105, avg=28.05, stdev= 8.07 00:30:55.908 clat (usec): min=102, max=718, avg=137.12, stdev=21.75 00:30:55.908 lat (usec): min=124, max=742, avg=165.16, stdev=23.33 00:30:55.908 clat percentiles (usec): 00:30:55.908 | 1.00th=[ 113], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 125], 00:30:55.908 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:30:55.908 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 159], 95.00th=[ 172], 00:30:55.908 | 99.00th=[ 198], 99.50th=[ 210], 99.90th=[ 318], 99.95th=[ 502], 00:30:55.908 | 99.99th=[ 717] 00:30:55.908 bw ( KiB/s): min=12288, max=12288, per=27.90%, avg=12288.00, stdev= 0.00, samples=1 00:30:55.908 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:30:55.908 lat (usec) : 250=99.44%, 500=0.52%, 750=0.04% 00:30:55.908 cpu : usr=2.70%, sys=10.10%, ctx=5569, majf=0, minf=17 00:30:55.908 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:55.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.908 issued rwts: total=2560,3007,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.908 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:55.908 job1: (groupid=0, jobs=1): err= 0: pid=70080: Tue Oct 1 20:29:50 2024 00:30:55.908 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:30:55.908 slat (nsec): min=13754, max=52104, avg=18342.24, stdev=4727.25 00:30:55.908 clat (usec): min=145, max=6074, avg=177.44, stdev=172.22 00:30:55.908 lat (usec): min=160, max=6120, avg=195.78, stdev=173.33 00:30:55.908 clat percentiles (usec): 00:30:55.908 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:30:55.908 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:30:55.908 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 194], 00:30:55.908 | 99.00th=[ 258], 99.50th=[ 293], 99.90th=[ 2999], 99.95th=[ 5211], 00:30:55.908 | 99.99th=[ 6063] 00:30:55.908 write: IOPS=2912, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1001msec); 0 zone resets 00:30:55.908 slat (nsec): min=19377, max=80024, avg=26072.66, stdev=6997.64 00:30:55.908 clat (usec): min=106, max=1634, avg=141.09, stdev=35.18 00:30:55.908 lat (usec): min=127, max=1655, avg=167.16, stdev=37.13 00:30:55.908 clat percentiles (usec): 00:30:55.908 | 1.00th=[ 114], 5.00th=[ 119], 10.00th=[ 122], 20.00th=[ 126], 00:30:55.908 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:30:55.908 | 70.00th=[ 143], 80.00th=[ 151], 90.00th=[ 174], 95.00th=[ 190], 00:30:55.908 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 289], 99.95th=[ 441], 00:30:55.908 | 99.99th=[ 1631] 00:30:55.908 bw ( KiB/s): min=12288, max=12288, per=27.90%, avg=12288.00, stdev= 0.00, samples=1 00:30:55.908 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:30:55.908 lat (usec) : 250=99.36%, 500=0.53% 00:30:55.908 lat (msec) : 2=0.05%, 4=0.02%, 10=0.04% 00:30:55.908 cpu : usr=2.20%, sys=9.60%, ctx=5475, majf=0, minf=12 00:30:55.908 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:55.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
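For orientation while reading these results: the four devices under test were assembled by the RPC sequence traced earlier (target/fio.sh@19-44). Collected in one place, with the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path abbreviated to rpc.py, it amounts to:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512     # repeated seven times, yielding Malloc0..Malloc6
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # likewise Malloc1, raid0, concat0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420   # plus the --hostnqn/--hostid uuid shown above

On the initiator that surfaces as /dev/nvme0n1-n4: two plain malloc namespaces, a RAID-0 across two mallocs, and a concat across three.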
00:30:55.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.908 issued rwts: total=2560,2915,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.908 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:55.908 job2: (groupid=0, jobs=1): err= 0: pid=70081: Tue Oct 1 20:29:50 2024 00:30:55.908 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:30:55.908 slat (nsec): min=11727, max=52453, avg=17580.42, stdev=6625.86 00:30:55.908 clat (usec): min=147, max=477, avg=213.71, stdev=50.71 00:30:55.908 lat (usec): min=162, max=489, avg=231.30, stdev=50.91 00:30:55.908 clat percentiles (usec): 00:30:55.908 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:30:55.908 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 196], 60.00th=[ 215], 00:30:55.908 | 70.00th=[ 233], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 297], 00:30:55.908 | 99.00th=[ 314], 99.50th=[ 396], 99.90th=[ 453], 99.95th=[ 465], 00:30:55.908 | 99.99th=[ 478] 00:30:55.908 write: IOPS=2548, BW=9.95MiB/s (10.4MB/s)(9.96MiB/1001msec); 0 zone resets 00:30:55.908 slat (usec): min=15, max=107, avg=24.41, stdev= 7.35 00:30:55.908 clat (usec): min=113, max=4818, avg=178.63, stdev=163.06 00:30:55.908 lat (usec): min=135, max=4850, avg=203.04, stdev=163.57 00:30:55.908 clat percentiles (usec): 00:30:55.908 | 1.00th=[ 123], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 00:30:55.908 | 30.00th=[ 141], 40.00th=[ 147], 50.00th=[ 155], 60.00th=[ 163], 00:30:55.908 | 70.00th=[ 178], 80.00th=[ 219], 90.00th=[ 239], 95.00th=[ 251], 00:30:55.908 | 99.00th=[ 334], 99.50th=[ 453], 99.90th=[ 3589], 99.95th=[ 4817], 00:30:55.908 | 99.99th=[ 4817] 00:30:55.908 bw ( KiB/s): min=12288, max=12288, per=27.90%, avg=12288.00, stdev= 0.00, samples=1 00:30:55.908 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:30:55.908 lat (usec) : 250=85.61%, 500=14.13%, 750=0.11%, 1000=0.04% 00:30:55.908 lat (msec) : 2=0.02%, 4=0.04%, 10=0.04% 00:30:55.908 cpu : usr=1.90%, sys=7.80%, ctx=4603, majf=0, minf=7 00:30:55.908 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:55.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.909 issued rwts: total=2048,2551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.909 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:55.909 job3: (groupid=0, jobs=1): err= 0: pid=70082: Tue Oct 1 20:29:50 2024 00:30:55.909 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:30:55.909 slat (usec): min=13, max=119, avg=18.58, stdev= 6.74 00:30:55.909 clat (usec): min=139, max=2282, avg=217.35, stdev=71.33 00:30:55.909 lat (usec): min=164, max=2308, avg=235.93, stdev=70.45 00:30:55.909 clat percentiles (usec): 00:30:55.909 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:30:55.909 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 184], 60.00th=[ 241], 00:30:55.909 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 297], 00:30:55.909 | 99.00th=[ 318], 99.50th=[ 392], 99.90th=[ 449], 99.95th=[ 469], 00:30:55.909 | 99.99th=[ 2278] 00:30:55.909 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(9.96MiB/1001msec); 0 zone resets 00:30:55.909 slat (usec): min=14, max=103, avg=28.72, stdev=10.22 00:30:55.909 clat (usec): min=114, max=496, avg=169.96, stdev=44.61 00:30:55.909 lat (usec): min=137, max=525, avg=198.68, stdev=43.73 00:30:55.909 clat percentiles (usec): 00:30:55.909 | 1.00th=[ 122], 5.00th=[ 129], 10.00th=[ 
133], 20.00th=[ 137], 00:30:55.909 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 151], 60.00th=[ 159], 00:30:55.909 | 70.00th=[ 180], 80.00th=[ 219], 90.00th=[ 239], 95.00th=[ 249], 00:30:55.909 | 99.00th=[ 302], 99.50th=[ 334], 99.90th=[ 469], 99.95th=[ 486], 00:30:55.909 | 99.99th=[ 498] 00:30:55.909 bw ( KiB/s): min=12288, max=12288, per=27.90%, avg=12288.00, stdev= 0.00, samples=1 00:30:55.909 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:30:55.909 lat (usec) : 250=81.12%, 500=18.86% 00:30:55.909 lat (msec) : 4=0.02% 00:30:55.909 cpu : usr=2.10%, sys=8.60%, ctx=4605, majf=0, minf=13 00:30:55.909 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:55.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.909 issued rwts: total=2048,2550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.909 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:55.909 00:30:55.909 Run status group 0 (all jobs): 00:30:55.909 READ: bw=36.0MiB/s (37.7MB/s), 8184KiB/s-9.99MiB/s (8380kB/s-10.5MB/s), io=36.0MiB (37.7MB), run=1001-1001msec 00:30:55.909 WRITE: bw=43.0MiB/s (45.1MB/s), 9.95MiB/s-11.7MiB/s (10.4MB/s-12.3MB/s), io=43.1MiB (45.1MB), run=1001-1001msec 00:30:55.909 00:30:55.909 Disk stats (read/write): 00:30:55.909 nvme0n1: ios=2269/2560, merge=0/0, ticks=413/393, in_queue=806, util=88.34% 00:30:55.909 nvme0n2: ios=2155/2560, merge=0/0, ticks=412/388, in_queue=800, util=88.24% 00:30:55.909 nvme0n3: ios=2027/2048, merge=0/0, ticks=435/339, in_queue=774, util=88.72% 00:30:55.909 nvme0n4: ios=2027/2048, merge=0/0, ticks=444/332, in_queue=776, util=89.69% 00:30:55.909 20:29:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:30:55.909 [global] 00:30:55.909 thread=1 00:30:55.909 invalidate=1 00:30:55.909 rw=write 00:30:55.909 time_based=1 00:30:55.909 runtime=1 00:30:55.909 ioengine=libaio 00:30:55.909 direct=1 00:30:55.909 bs=4096 00:30:55.909 iodepth=128 00:30:55.909 norandommap=0 00:30:55.909 numjobs=1 00:30:55.909 00:30:55.909 verify_dump=1 00:30:55.909 verify_backlog=512 00:30:55.909 verify_state_save=0 00:30:55.909 do_verify=1 00:30:55.909 verify=crc32c-intel 00:30:55.909 [job0] 00:30:55.909 filename=/dev/nvme0n1 00:30:55.909 [job1] 00:30:55.909 filename=/dev/nvme0n2 00:30:55.909 [job2] 00:30:55.909 filename=/dev/nvme0n3 00:30:55.909 [job3] 00:30:55.909 filename=/dev/nvme0n4 00:30:55.909 Could not set queue depth (nvme0n1) 00:30:55.909 Could not set queue depth (nvme0n2) 00:30:55.909 Could not set queue depth (nvme0n3) 00:30:55.909 Could not set queue depth (nvme0n4) 00:30:55.909 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:55.909 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:55.909 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:55.909 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:55.909 fio-3.35 00:30:55.909 Starting 4 threads 00:30:57.279 00:30:57.279 job0: (groupid=0, jobs=1): err= 0: pid=70137: Tue Oct 1 20:29:52 2024 00:30:57.279 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:30:57.279 slat (usec): min=6, max=14089, avg=124.21, stdev=714.99 
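A quick sanity check that ties fio's numbers together: bandwidth is just IOPS times block size. For job0 just above, 4091 IOPS at bs=4096 gives 4091 * 4096 B = ~16.8 MB/s, exactly the 16.0 MiB/s (16.8 MB/s) fio reports, and the same arithmetic links every IOPS/BW pair in this log since all runs use 4096-byte blocks.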
00:30:57.279 clat (usec): min=8398, max=47913, avg=16108.39, stdev=9104.14 00:30:57.279 lat (usec): min=8417, max=47925, avg=16232.59, stdev=9163.82 00:30:57.279 clat percentiles (usec): 00:30:57.279 | 1.00th=[ 8979], 5.00th=[10159], 10.00th=[11207], 20.00th=[11469], 00:30:57.279 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:30:57.279 | 70.00th=[12780], 80.00th=[21890], 90.00th=[30540], 95.00th=[41681], 00:30:57.279 | 99.00th=[43254], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:30:57.279 | 99.99th=[47973] 00:30:57.279 write: IOPS=4242, BW=16.6MiB/s (17.4MB/s)(16.6MiB/1001msec); 0 zone resets 00:30:57.279 slat (usec): min=8, max=10674, avg=108.00, stdev=560.34 00:30:57.279 clat (usec): min=833, max=34484, avg=13976.53, stdev=5833.64 00:30:57.279 lat (usec): min=850, max=34531, avg=14084.54, stdev=5858.44 00:30:57.279 clat percentiles (usec): 00:30:57.279 | 1.00th=[ 5145], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[10683], 00:30:57.279 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:30:57.279 | 70.00th=[12125], 80.00th=[19530], 90.00th=[24511], 95.00th=[25297], 00:30:57.279 | 99.00th=[31589], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:30:57.279 | 99.99th=[34341] 00:30:57.279 bw ( KiB/s): min=12312, max=20672, per=28.52%, avg=16492.00, stdev=5911.41, samples=2 00:30:57.279 iops : min= 3078, max= 5168, avg=4123.00, stdev=1477.85, samples=2 00:30:57.279 lat (usec) : 1000=0.11% 00:30:57.279 lat (msec) : 2=0.01%, 10=8.27%, 20=72.08%, 50=19.53% 00:30:57.279 cpu : usr=4.50%, sys=10.40%, ctx=356, majf=0, minf=15 00:30:57.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:30:57.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:57.279 issued rwts: total=4096,4247,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.279 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:57.279 job1: (groupid=0, jobs=1): err= 0: pid=70138: Tue Oct 1 20:29:52 2024 00:30:57.279 read: IOPS=4124, BW=16.1MiB/s (16.9MB/s)(16.1MiB/1002msec) 00:30:57.279 slat (usec): min=6, max=7483, avg=114.34, stdev=569.67 00:30:57.279 clat (usec): min=520, max=41499, avg=14724.99, stdev=6670.55 00:30:57.279 lat (usec): min=2501, max=41515, avg=14839.33, stdev=6708.84 00:30:57.279 clat percentiles (usec): 00:30:57.279 | 1.00th=[ 8979], 5.00th=[10028], 10.00th=[11076], 20.00th=[11469], 00:30:57.279 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11731], 60.00th=[11863], 00:30:57.279 | 70.00th=[12387], 80.00th=[17695], 90.00th=[26608], 95.00th=[30802], 00:30:57.279 | 99.00th=[37487], 99.50th=[38011], 99.90th=[38011], 99.95th=[39060], 00:30:57.279 | 99.99th=[41681] 00:30:57.279 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:30:57.279 slat (usec): min=10, max=7448, avg=107.35, stdev=546.29 00:30:57.279 clat (usec): min=5455, max=31104, avg=14119.33, stdev=5733.78 00:30:57.279 lat (usec): min=5511, max=31127, avg=14226.68, stdev=5758.04 00:30:57.279 clat percentiles (usec): 00:30:57.279 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9634], 20.00th=[10028], 00:30:57.279 | 30.00th=[10421], 40.00th=[11469], 50.00th=[11994], 60.00th=[12387], 00:30:57.279 | 70.00th=[12911], 80.00th=[17957], 90.00th=[24249], 95.00th=[26346], 00:30:57.279 | 99.00th=[30016], 99.50th=[30802], 99.90th=[31065], 99.95th=[31065], 00:30:57.279 | 99.99th=[31065] 00:30:57.279 bw ( KiB/s): min=12312, max=23856, per=31.27%, 
avg=18084.00, stdev=8162.84, samples=2 00:30:57.279 iops : min= 3078, max= 5964, avg=4521.00, stdev=2040.71, samples=2 00:30:57.279 lat (usec) : 750=0.01% 00:30:57.279 lat (msec) : 4=0.37%, 10=11.26%, 20=70.72%, 50=17.64% 00:30:57.279 cpu : usr=4.40%, sys=11.09%, ctx=415, majf=0, minf=13 00:30:57.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:30:57.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:57.279 issued rwts: total=4133,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.279 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:57.279 job2: (groupid=0, jobs=1): err= 0: pid=70139: Tue Oct 1 20:29:52 2024 00:30:57.279 read: IOPS=2337, BW=9349KiB/s (9574kB/s)(9368KiB/1002msec) 00:30:57.279 slat (usec): min=4, max=9466, avg=205.10, stdev=1022.26 00:30:57.279 clat (usec): min=1075, max=37530, avg=25535.66, stdev=5356.11 00:30:57.279 lat (usec): min=6746, max=38348, avg=25740.76, stdev=5316.91 00:30:57.279 clat percentiles (usec): 00:30:57.279 | 1.00th=[ 7242], 5.00th=[18744], 10.00th=[20841], 20.00th=[21890], 00:30:57.279 | 30.00th=[22938], 40.00th=[23725], 50.00th=[24773], 60.00th=[25297], 00:30:57.279 | 70.00th=[27395], 80.00th=[31065], 90.00th=[33817], 95.00th=[34341], 00:30:57.279 | 99.00th=[37487], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:30:57.279 | 99.99th=[37487] 00:30:57.279 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:30:57.279 slat (usec): min=9, max=5585, avg=195.79, stdev=734.13 00:30:57.279 clat (usec): min=11047, max=52565, avg=26016.95, stdev=9001.19 00:30:57.279 lat (usec): min=13676, max=52600, avg=26212.74, stdev=9045.36 00:30:57.279 clat percentiles (usec): 00:30:57.279 | 1.00th=[14877], 5.00th=[15795], 10.00th=[17695], 20.00th=[18744], 00:30:57.279 | 30.00th=[19530], 40.00th=[20841], 50.00th=[22152], 60.00th=[25297], 00:30:57.279 | 70.00th=[28967], 80.00th=[32900], 90.00th=[41157], 95.00th=[45351], 00:30:57.279 | 99.00th=[50594], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:30:57.279 | 99.99th=[52691] 00:30:57.279 bw ( KiB/s): min= 8192, max=12288, per=17.71%, avg=10240.00, stdev=2896.31, samples=2 00:30:57.279 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:30:57.279 lat (msec) : 2=0.02%, 10=0.65%, 20=18.16%, 50=80.31%, 100=0.86% 00:30:57.279 cpu : usr=2.20%, sys=6.79%, ctx=272, majf=0, minf=6 00:30:57.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:30:57.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:57.279 issued rwts: total=2342,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.279 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:57.279 job3: (groupid=0, jobs=1): err= 0: pid=70140: Tue Oct 1 20:29:52 2024 00:30:57.279 read: IOPS=2875, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1002msec) 00:30:57.279 slat (usec): min=4, max=8707, avg=174.13, stdev=839.04 00:30:57.279 clat (usec): min=1143, max=33051, avg=21937.95, stdev=3981.70 00:30:57.279 lat (usec): min=7063, max=33064, avg=22112.08, stdev=3941.57 00:30:57.279 clat percentiles (usec): 00:30:57.279 | 1.00th=[ 7635], 5.00th=[17433], 10.00th=[18220], 20.00th=[19006], 00:30:57.279 | 30.00th=[19792], 40.00th=[20317], 50.00th=[21365], 60.00th=[22414], 00:30:57.279 | 70.00th=[23462], 80.00th=[24773], 90.00th=[26870], 
95.00th=[30278], 00:30:57.279 | 99.00th=[32900], 99.50th=[32900], 99.90th=[33162], 99.95th=[33162], 00:30:57.279 | 99.99th=[33162] 00:30:57.279 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:30:57.279 slat (usec): min=11, max=5619, avg=154.39, stdev=650.52 00:30:57.279 clat (usec): min=12284, max=38454, avg=20508.68, stdev=6348.76 00:30:57.279 lat (usec): min=13361, max=38485, avg=20663.08, stdev=6369.86 00:30:57.279 clat percentiles (usec): 00:30:57.279 | 1.00th=[13566], 5.00th=[14091], 10.00th=[15401], 20.00th=[16057], 00:30:57.279 | 30.00th=[16450], 40.00th=[17433], 50.00th=[18482], 60.00th=[20055], 00:30:57.279 | 70.00th=[20841], 80.00th=[21890], 90.00th=[32113], 95.00th=[36963], 00:30:57.279 | 99.00th=[38011], 99.50th=[38011], 99.90th=[38536], 99.95th=[38536], 00:30:57.279 | 99.99th=[38536] 00:30:57.279 bw ( KiB/s): min=12264, max=12336, per=21.27%, avg=12300.00, stdev=50.91, samples=2 00:30:57.279 iops : min= 3066, max= 3084, avg=3075.00, stdev=12.73, samples=2 00:30:57.279 lat (msec) : 2=0.02%, 10=0.54%, 20=47.84%, 50=51.60% 00:30:57.279 cpu : usr=2.70%, sys=8.99%, ctx=314, majf=0, minf=9 00:30:57.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:30:57.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:57.279 issued rwts: total=2881,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.279 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:57.279 00:30:57.279 Run status group 0 (all jobs): 00:30:57.279 READ: bw=52.4MiB/s (55.0MB/s), 9349KiB/s-16.1MiB/s (9574kB/s-16.9MB/s), io=52.5MiB (55.1MB), run=1001-1002msec 00:30:57.280 WRITE: bw=56.5MiB/s (59.2MB/s), 9.98MiB/s-18.0MiB/s (10.5MB/s-18.8MB/s), io=56.6MiB (59.3MB), run=1001-1002msec 00:30:57.280 00:30:57.280 Disk stats (read/write): 00:30:57.280 nvme0n1: ios=3122/3580, merge=0/0, ticks=14659/13359, in_queue=28018, util=86.97% 00:30:57.280 nvme0n2: ios=3441/3584, merge=0/0, ticks=12701/11847, in_queue=24548, util=86.85% 00:30:57.280 nvme0n3: ios=1960/2048, merge=0/0, ticks=12559/13285, in_queue=25844, util=88.83% 00:30:57.280 nvme0n4: ios=2560/2743, merge=0/0, ticks=13532/11239, in_queue=24771, util=89.57% 00:30:57.280 20:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:30:57.280 [global] 00:30:57.280 thread=1 00:30:57.280 invalidate=1 00:30:57.280 rw=randwrite 00:30:57.280 time_based=1 00:30:57.280 runtime=1 00:30:57.280 ioengine=libaio 00:30:57.280 direct=1 00:30:57.280 bs=4096 00:30:57.280 iodepth=128 00:30:57.280 norandommap=0 00:30:57.280 numjobs=1 00:30:57.280 00:30:57.280 verify_dump=1 00:30:57.280 verify_backlog=512 00:30:57.280 verify_state_save=0 00:30:57.280 do_verify=1 00:30:57.280 verify=crc32c-intel 00:30:57.280 [job0] 00:30:57.280 filename=/dev/nvme0n1 00:30:57.280 [job1] 00:30:57.280 filename=/dev/nvme0n2 00:30:57.280 [job2] 00:30:57.280 filename=/dev/nvme0n3 00:30:57.280 [job3] 00:30:57.280 filename=/dev/nvme0n4 00:30:57.280 Could not set queue depth (nvme0n1) 00:30:57.280 Could not set queue depth (nvme0n2) 00:30:57.280 Could not set queue depth (nvme0n3) 00:30:57.280 Could not set queue depth (nvme0n4) 00:30:57.280 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:57.280 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:30:57.280 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:57.280 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:57.280 fio-3.35 00:30:57.280 Starting 4 threads 00:30:58.655 00:30:58.655 job0: (groupid=0, jobs=1): err= 0: pid=70200: Tue Oct 1 20:29:53 2024 00:30:58.655 read: IOPS=5549, BW=21.7MiB/s (22.7MB/s)(21.7MiB/1002msec) 00:30:58.655 slat (usec): min=6, max=5304, avg=88.14, stdev=421.31 00:30:58.655 clat (usec): min=533, max=16329, avg=11397.88, stdev=1520.20 00:30:58.655 lat (usec): min=5226, max=17329, avg=11486.02, stdev=1547.58 00:30:58.655 clat percentiles (usec): 00:30:58.655 | 1.00th=[ 6194], 5.00th=[ 8586], 10.00th=[ 9896], 20.00th=[10552], 00:30:58.655 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:30:58.655 | 70.00th=[11994], 80.00th=[12518], 90.00th=[13304], 95.00th=[13829], 00:30:58.655 | 99.00th=[15270], 99.50th=[15664], 99.90th=[16188], 99.95th=[16188], 00:30:58.655 | 99.99th=[16319] 00:30:58.655 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:30:58.655 slat (usec): min=9, max=4568, avg=82.54, stdev=346.62 00:30:58.655 clat (usec): min=6344, max=16839, avg=11239.62, stdev=1241.21 00:30:58.655 lat (usec): min=6380, max=16883, avg=11322.16, stdev=1272.55 00:30:58.655 clat percentiles (usec): 00:30:58.655 | 1.00th=[ 7701], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10552], 00:30:58.655 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:30:58.655 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[13173], 00:30:58.655 | 99.00th=[15401], 99.50th=[15795], 99.90th=[15926], 99.95th=[15926], 00:30:58.655 | 99.99th=[16909] 00:30:58.655 bw ( KiB/s): min=22096, max=22960, per=34.33%, avg=22528.00, stdev=610.94, samples=2 00:30:58.655 iops : min= 5524, max= 5740, avg=5632.00, stdev=152.74, samples=2 00:30:58.655 lat (usec) : 750=0.01% 00:30:58.655 lat (msec) : 10=10.58%, 20=89.41% 00:30:58.655 cpu : usr=5.59%, sys=15.38%, ctx=724, majf=0, minf=13 00:30:58.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:30:58.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:58.655 issued rwts: total=5561,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:58.655 job1: (groupid=0, jobs=1): err= 0: pid=70201: Tue Oct 1 20:29:53 2024 00:30:58.655 read: IOPS=2615, BW=10.2MiB/s (10.7MB/s)(10.3MiB/1007msec) 00:30:58.655 slat (usec): min=3, max=8750, avg=185.40, stdev=781.68 00:30:58.655 clat (usec): min=2650, max=36086, avg=22266.66, stdev=5296.47 00:30:58.655 lat (usec): min=8670, max=36099, avg=22452.06, stdev=5357.24 00:30:58.655 clat percentiles (usec): 00:30:58.655 | 1.00th=[ 9765], 5.00th=[13042], 10.00th=[13960], 20.00th=[15533], 00:30:58.655 | 30.00th=[21627], 40.00th=[23462], 50.00th=[23987], 60.00th=[24249], 00:30:58.655 | 70.00th=[24511], 80.00th=[25035], 90.00th=[28705], 95.00th=[30278], 00:30:58.655 | 99.00th=[32113], 99.50th=[32900], 99.90th=[34341], 99.95th=[35390], 00:30:58.655 | 99.99th=[35914] 00:30:58.655 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:30:58.655 slat (usec): min=8, max=10347, avg=160.61, stdev=500.30 00:30:58.655 clat (usec): min=8434, max=34303, avg=22351.65, 
stdev=5926.95 00:30:58.655 lat (usec): min=8459, max=34519, avg=22512.26, stdev=5977.81 00:30:58.655 clat percentiles (usec): 00:30:58.655 | 1.00th=[10028], 5.00th=[12387], 10.00th=[13042], 20.00th=[14091], 00:30:58.655 | 30.00th=[21365], 40.00th=[24249], 50.00th=[24773], 60.00th=[25035], 00:30:58.655 | 70.00th=[25297], 80.00th=[25822], 90.00th=[28705], 95.00th=[30802], 00:30:58.655 | 99.00th=[32375], 99.50th=[32900], 99.90th=[33162], 99.95th=[33817], 00:30:58.655 | 99.99th=[34341] 00:30:58.655 bw ( KiB/s): min=11856, max=12288, per=18.40%, avg=12072.00, stdev=305.47, samples=2 00:30:58.655 iops : min= 2964, max= 3072, avg=3018.00, stdev=76.37, samples=2 00:30:58.655 lat (msec) : 4=0.02%, 10=1.17%, 20=25.50%, 50=73.31% 00:30:58.655 cpu : usr=3.28%, sys=7.26%, ctx=941, majf=0, minf=17 00:30:58.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:30:58.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:58.655 issued rwts: total=2634,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:58.655 job2: (groupid=0, jobs=1): err= 0: pid=70202: Tue Oct 1 20:29:53 2024 00:30:58.655 read: IOPS=2586, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1006msec) 00:30:58.655 slat (usec): min=3, max=6403, avg=185.57, stdev=740.98 00:30:58.655 clat (usec): min=983, max=35707, avg=22421.99, stdev=4876.14 00:30:58.655 lat (usec): min=6917, max=35728, avg=22607.55, stdev=4928.48 00:30:58.656 clat percentiles (usec): 00:30:58.656 | 1.00th=[10814], 5.00th=[12780], 10.00th=[14222], 20.00th=[17957], 00:30:58.656 | 30.00th=[22152], 40.00th=[23725], 50.00th=[23987], 60.00th=[24249], 00:30:58.656 | 70.00th=[24511], 80.00th=[25035], 90.00th=[27395], 95.00th=[28967], 00:30:58.656 | 99.00th=[31589], 99.50th=[32900], 99.90th=[33817], 99.95th=[33817], 00:30:58.656 | 99.99th=[35914] 00:30:58.656 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:30:58.656 slat (usec): min=3, max=5110, avg=162.67, stdev=459.90 00:30:58.656 clat (usec): min=10465, max=33833, avg=22373.34, stdev=5192.59 00:30:58.656 lat (usec): min=10488, max=33870, avg=22536.01, stdev=5231.25 00:30:58.656 clat percentiles (usec): 00:30:58.656 | 1.00th=[11338], 5.00th=[13960], 10.00th=[14353], 20.00th=[15139], 00:30:58.656 | 30.00th=[21365], 40.00th=[24249], 50.00th=[24511], 60.00th=[25035], 00:30:58.656 | 70.00th=[25297], 80.00th=[25560], 90.00th=[27395], 95.00th=[29230], 00:30:58.656 | 99.00th=[31327], 99.50th=[31589], 99.90th=[32375], 99.95th=[33817], 00:30:58.656 | 99.99th=[33817] 00:30:58.656 bw ( KiB/s): min=11600, max=12263, per=18.18%, avg=11931.50, stdev=468.81, samples=2 00:30:58.656 iops : min= 2900, max= 3065, avg=2982.50, stdev=116.67, samples=2 00:30:58.656 lat (usec) : 1000=0.02% 00:30:58.656 lat (msec) : 10=0.42%, 20=25.64%, 50=73.92% 00:30:58.656 cpu : usr=1.79%, sys=8.66%, ctx=999, majf=0, minf=9 00:30:58.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:30:58.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:58.656 issued rwts: total=2602,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:58.656 job3: (groupid=0, jobs=1): err= 0: pid=70203: Tue Oct 1 20:29:53 2024 00:30:58.656 read: IOPS=4594, BW=17.9MiB/s 
(18.8MB/s)(18.0MiB/1003msec) 00:30:58.656 slat (usec): min=6, max=7200, avg=103.96, stdev=500.12 00:30:58.656 clat (usec): min=10054, max=23794, avg=13831.79, stdev=2243.39 00:30:58.656 lat (usec): min=10506, max=23809, avg=13935.76, stdev=2219.27 00:30:58.656 clat percentiles (usec): 00:30:58.656 | 1.00th=[10552], 5.00th=[11469], 10.00th=[12649], 20.00th=[12911], 00:30:58.656 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:30:58.656 | 70.00th=[13566], 80.00th=[13829], 90.00th=[15664], 95.00th=[19268], 00:30:58.656 | 99.00th=[22938], 99.50th=[23725], 99.90th=[23725], 99.95th=[23725], 00:30:58.656 | 99.99th=[23725] 00:30:58.656 write: IOPS=4729, BW=18.5MiB/s (19.4MB/s)(18.5MiB/1003msec); 0 zone resets 00:30:58.656 slat (usec): min=8, max=4097, avg=101.78, stdev=452.49 00:30:58.656 clat (usec): min=2308, max=24091, avg=13219.05, stdev=2439.99 00:30:58.656 lat (usec): min=2867, max=24115, avg=13320.83, stdev=2444.49 00:30:58.656 clat percentiles (usec): 00:30:58.656 | 1.00th=[ 7177], 5.00th=[10814], 10.00th=[11076], 20.00th=[11469], 00:30:58.656 | 30.00th=[11600], 40.00th=[11994], 50.00th=[13304], 60.00th=[13698], 00:30:58.656 | 70.00th=[14091], 80.00th=[14484], 90.00th=[15139], 95.00th=[18744], 00:30:58.656 | 99.00th=[20579], 99.50th=[23987], 99.90th=[23987], 99.95th=[23987], 00:30:58.656 | 99.99th=[23987] 00:30:58.656 bw ( KiB/s): min=17936, max=19000, per=28.14%, avg=18468.00, stdev=752.36, samples=2 00:30:58.656 iops : min= 4484, max= 4750, avg=4617.00, stdev=188.09, samples=2 00:30:58.656 lat (msec) : 4=0.18%, 10=0.60%, 20=96.05%, 50=3.17% 00:30:58.656 cpu : usr=4.99%, sys=12.48%, ctx=464, majf=0, minf=11 00:30:58.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:30:58.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:58.656 issued rwts: total=4608,4744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:58.656 00:30:58.656 Run status group 0 (all jobs): 00:30:58.656 READ: bw=59.8MiB/s (62.7MB/s), 10.1MiB/s-21.7MiB/s (10.6MB/s-22.7MB/s), io=60.2MiB (63.1MB), run=1002-1007msec 00:30:58.656 WRITE: bw=64.1MiB/s (67.2MB/s), 11.9MiB/s-22.0MiB/s (12.5MB/s-23.0MB/s), io=64.5MiB (67.7MB), run=1002-1007msec 00:30:58.656 00:30:58.656 Disk stats (read/write): 00:30:58.656 nvme0n1: ios=4658/4903, merge=0/0, ticks=25034/23794, in_queue=48828, util=86.57% 00:30:58.656 nvme0n2: ios=2359/2560, merge=0/0, ticks=18240/18669, in_queue=36909, util=87.02% 00:30:58.656 nvme0n3: ios=2300/2560, merge=0/0, ticks=16735/17225, in_queue=33960, util=88.65% 00:30:58.656 nvme0n4: ios=3750/4096, merge=0/0, ticks=12015/11893, in_queue=23908, util=88.96% 00:30:58.656 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:30:58.656 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=70216 00:30:58.656 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:30:58.656 20:29:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:30:58.656 [global] 00:30:58.656 thread=1 00:30:58.656 invalidate=1 00:30:58.656 rw=read 00:30:58.656 time_based=1 00:30:58.656 runtime=10 00:30:58.656 ioengine=libaio 00:30:58.656 direct=1 00:30:58.656 bs=4096 00:30:58.656 iodepth=1 00:30:58.656 norandommap=1 00:30:58.656 
numjobs=1 00:30:58.656 00:30:58.656 [job0] 00:30:58.656 filename=/dev/nvme0n1 00:30:58.656 [job1] 00:30:58.656 filename=/dev/nvme0n2 00:30:58.656 [job2] 00:30:58.656 filename=/dev/nvme0n3 00:30:58.656 [job3] 00:30:58.656 filename=/dev/nvme0n4 00:30:58.656 Could not set queue depth (nvme0n1) 00:30:58.656 Could not set queue depth (nvme0n2) 00:30:58.656 Could not set queue depth (nvme0n3) 00:30:58.656 Could not set queue depth (nvme0n4) 00:30:58.656 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:58.656 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:58.656 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:58.656 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:58.656 fio-3.35 00:30:58.656 Starting 4 threads 00:31:01.941 20:29:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:01.941 fio: pid=70263, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:01.941 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=42627072, buflen=4096 00:31:01.941 20:29:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:01.941 fio: pid=70262, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:01.941 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=41144320, buflen=4096 00:31:02.199 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:02.199 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:02.457 fio: pid=70260, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:02.457 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=45965312, buflen=4096 00:31:02.457 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:02.457 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:02.716 fio: pid=70261, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:02.716 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=51920896, buflen=4096 00:31:02.716 00:31:02.716 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70260: Tue Oct 1 20:29:57 2024 00:31:02.716 read: IOPS=3167, BW=12.4MiB/s (13.0MB/s)(43.8MiB/3543msec) 00:31:02.716 slat (usec): min=8, max=15751, avg=20.66, stdev=214.40 00:31:02.716 clat (usec): min=127, max=6448, avg=293.13, stdev=112.44 00:31:02.716 lat (usec): min=141, max=15987, avg=313.80, stdev=242.40 00:31:02.716 clat percentiles (usec): 00:31:02.716 | 1.00th=[ 182], 5.00th=[ 221], 10.00th=[ 237], 20.00th=[ 260], 00:31:02.716 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 281], 00:31:02.716 | 70.00th=[ 297], 80.00th=[ 322], 90.00th=[ 367], 95.00th=[ 408], 00:31:02.716 | 99.00th=[ 498], 99.50th=[ 537], 99.90th=[ 1090], 99.95th=[ 
2180], 00:31:02.716 | 99.99th=[ 4948] 00:31:02.716 bw ( KiB/s): min= 9904, max=13992, per=27.51%, avg=12570.67, stdev=1402.50, samples=6 00:31:02.716 iops : min= 2476, max= 3498, avg=3142.67, stdev=350.63, samples=6 00:31:02.716 lat (usec) : 250=12.20%, 500=86.89%, 750=0.73%, 1000=0.05% 00:31:02.716 lat (msec) : 2=0.05%, 4=0.04%, 10=0.02% 00:31:02.716 cpu : usr=1.13%, sys=4.94%, ctx=11250, majf=0, minf=1 00:31:02.716 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:02.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.716 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.716 issued rwts: total=11223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.716 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:02.716 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70261: Tue Oct 1 20:29:57 2024 00:31:02.716 read: IOPS=3265, BW=12.8MiB/s (13.4MB/s)(49.5MiB/3882msec) 00:31:02.716 slat (usec): min=11, max=13971, avg=20.73, stdev=205.99 00:31:02.716 clat (usec): min=122, max=167158, avg=283.82, stdev=1683.32 00:31:02.716 lat (usec): min=141, max=167180, avg=304.55, stdev=1696.05 00:31:02.716 clat percentiles (usec): 00:31:02.716 | 1.00th=[ 133], 5.00th=[ 145], 10.00th=[ 169], 20.00th=[ 237], 00:31:02.716 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:31:02.716 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 318], 95.00th=[ 338], 00:31:02.716 | 99.00th=[ 400], 99.50th=[ 465], 99.90th=[ 1270], 99.95th=[ 2278], 00:31:02.716 | 99.99th=[89654] 00:31:02.716 bw ( KiB/s): min= 8888, max=14520, per=28.42%, avg=12987.43, stdev=1948.62, samples=7 00:31:02.716 iops : min= 2222, max= 3630, avg=3246.86, stdev=487.15, samples=7 00:31:02.716 lat (usec) : 250=22.69%, 500=76.85%, 750=0.29%, 1000=0.04% 00:31:02.716 lat (msec) : 2=0.06%, 4=0.04%, 10=0.01%, 100=0.01%, 250=0.01% 00:31:02.716 cpu : usr=1.16%, sys=4.30%, ctx=12696, majf=0, minf=2 00:31:02.716 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:02.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.717 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.717 issued rwts: total=12677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.717 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:02.717 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70262: Tue Oct 1 20:29:57 2024 00:31:02.717 read: IOPS=3105, BW=12.1MiB/s (12.7MB/s)(39.2MiB/3235msec) 00:31:02.717 slat (usec): min=8, max=7654, avg=18.72, stdev=100.16 00:31:02.717 clat (usec): min=49, max=6379, avg=301.34, stdev=116.52 00:31:02.717 lat (usec): min=197, max=7945, avg=320.05, stdev=154.36 00:31:02.717 clat percentiles (usec): 00:31:02.717 | 1.00th=[ 233], 5.00th=[ 255], 10.00th=[ 260], 20.00th=[ 265], 00:31:02.717 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:31:02.717 | 70.00th=[ 302], 80.00th=[ 330], 90.00th=[ 371], 95.00th=[ 408], 00:31:02.717 | 99.00th=[ 494], 99.50th=[ 537], 99.90th=[ 1598], 99.95th=[ 2180], 00:31:02.717 | 99.99th=[ 4686] 00:31:02.717 bw ( KiB/s): min= 9904, max=13992, per=27.66%, avg=12640.00, stdev=1457.07, samples=6 00:31:02.717 iops : min= 2476, max= 3498, avg=3160.00, stdev=364.27, samples=6 00:31:02.717 lat (usec) : 50=0.01%, 250=2.49%, 500=96.58%, 750=0.75%, 1000=0.03% 00:31:02.717 lat (msec) : 2=0.06%, 4=0.06%, 10=0.02% 
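The err=95 (Operation not supported) entries above are the point of this phase rather than a defect: target/fio.sh starts a 10-second read job in the background and then deletes the backing bdevs underneath it, so in-flight I/O against each /dev/nvme0nX fails once its bdev disappears. Condensed from fio.sh@58-70 in this log (the rpc.py calls are verbatim from the trace; the wait/exit-status plumbing is an assumed sketch):

    fio_pid=70216                       # background fio-wrapper from fio.sh@58/@59
    rpc.py bdev_raid_delete concat0
    rpc.py bdev_raid_delete raid0
    rpc.py bdev_malloc_delete Malloc0   # Malloc1 likewise; Malloc2..Malloc6 follow below
    wait "$fio_pid" || fio_status=$?    # 4 in this run; nonzero is the pass condition

which fio.sh@80 then reports below as "nvmf hotplug test: fio failed as expected".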
00:31:02.717 cpu : usr=1.30%, sys=4.64%, ctx=10073, majf=0, minf=2 00:31:02.717 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:02.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.717 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.717 issued rwts: total=10046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.717 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:02.717 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70263: Tue Oct 1 20:29:57 2024 00:31:02.717 read: IOPS=3503, BW=13.7MiB/s (14.3MB/s)(40.7MiB/2971msec) 00:31:02.717 slat (nsec): min=8638, max=90764, avg=20805.38, stdev=6608.26 00:31:02.717 clat (usec): min=144, max=2319, avg=262.41, stdev=53.23 00:31:02.717 lat (usec): min=160, max=2348, avg=283.22, stdev=54.84 00:31:02.717 clat percentiles (usec): 00:31:02.717 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 229], 20.00th=[ 251], 00:31:02.717 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:31:02.717 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 322], 00:31:02.717 | 99.00th=[ 347], 99.50th=[ 371], 99.90th=[ 611], 99.95th=[ 644], 00:31:02.717 | 99.99th=[ 2212] 00:31:02.717 bw ( KiB/s): min=12120, max=13888, per=29.58%, avg=13516.80, stdev=781.03, samples=5 00:31:02.717 iops : min= 3030, max= 3472, avg=3379.20, stdev=195.26, samples=5 00:31:02.717 lat (usec) : 250=18.05%, 500=81.72%, 750=0.17%, 1000=0.02% 00:31:02.717 lat (msec) : 4=0.03% 00:31:02.717 cpu : usr=1.38%, sys=5.99%, ctx=10408, majf=0, minf=1 00:31:02.717 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:02.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.717 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.717 issued rwts: total=10408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.717 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:02.717 00:31:02.717 Run status group 0 (all jobs): 00:31:02.717 READ: bw=44.6MiB/s (46.8MB/s), 12.1MiB/s-13.7MiB/s (12.7MB/s-14.3MB/s), io=173MiB (182MB), run=2971-3882msec 00:31:02.717 00:31:02.717 Disk stats (read/write): 00:31:02.717 nvme0n1: ios=10534/0, merge=0/0, ticks=3087/0, in_queue=3087, util=95.17% 00:31:02.717 nvme0n2: ios=12641/0, merge=0/0, ticks=3544/0, in_queue=3544, util=95.69% 00:31:02.717 nvme0n3: ios=9725/0, merge=0/0, ticks=2894/0, in_queue=2894, util=96.25% 00:31:02.717 nvme0n4: ios=9899/0, merge=0/0, ticks=2681/0, in_queue=2681, util=96.73% 00:31:02.717 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:02.717 20:29:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:02.975 20:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:02.975 20:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:03.232 20:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:03.232 20:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:03.799 20:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:03.799 20:29:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:03.799 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:03.799 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:04.398 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:04.398 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 70216 00:31:04.398 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:04.398 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:04.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:04.398 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:04.398 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:31:04.398 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:31:04.398 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:04.398 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:04.398 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:31:04.398 nvmf hotplug test: fio failed as expected 00:31:04.398 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:31:04.398 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:04.398 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:04.398 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@124 -- # set +e 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:04.657 rmmod nvme_tcp 00:31:04.657 rmmod nvme_fabrics 00:31:04.657 rmmod nvme_keyring 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 69731 ']' 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 69731 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 69731 ']' 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 69731 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69731 00:31:04.657 killing process with pid 69731 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69731' 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 69731 00:31:04.657 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 69731 00:31:04.915 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:04.915 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:04.915 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:04.915 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:04.915 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:04.915 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:31:04.915 20:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:31:04.915 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:04.915 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:04.915 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:04.915 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:04.915 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:04.915 20:30:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:04.915 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:04.915 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:04.915 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:04.915 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:04.915 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:04.915 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:04.915 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:04.915 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:05.173 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:05.173 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:05.173 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.173 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:05.173 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.173 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:31:05.173 ************************************ 00:31:05.173 END TEST nvmf_fio_target 00:31:05.173 ************************************ 00:31:05.173 00:31:05.173 real 0m20.118s 00:31:05.173 user 1m17.552s 00:31:05.173 sys 0m8.389s 00:31:05.173 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:05.173 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:05.173 20:30:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:31:05.173 20:30:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:05.173 20:30:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:05.173 20:30:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:31:05.173 ************************************ 00:31:05.173 START TEST nvmf_bdevio 00:31:05.173 ************************************ 00:31:05.173 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:31:05.173 * Looking for test storage... 
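A note on the fio stage that just finished above: the fio banner earlier in the log (job0 through job3: rw=read, bs=4096B, ioengine=libaio, iodepth=1, numjobs=1, against /dev/nvme0n1 through /dev/nvme0n4) implies a job file along the lines of the sketch below. Size and runtime options are not recoverable from the banner, and the file name here is illustrative only:

# Sketch of a job file consistent with the fio banner above (fio-3.35).
cat > fio-nvmf-read.ini <<'EOF'
[global]
rw=read
bs=4096
ioengine=libaio
iodepth=1
numjobs=1

[job0]
filename=/dev/nvme0n1

[job1]
filename=/dev/nvme0n2

[job2]
filename=/dev/nvme0n3

[job3]
filename=/dev/nvme0n4
EOF
fio fio-nvmf-read.ini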
00:31:05.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:05.173 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:05.173 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:31:05.173 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:05.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.431 --rc genhtml_branch_coverage=1 00:31:05.431 --rc genhtml_function_coverage=1 00:31:05.431 --rc genhtml_legend=1 00:31:05.431 --rc geninfo_all_blocks=1 00:31:05.431 --rc geninfo_unexecuted_blocks=1 00:31:05.431 00:31:05.431 ' 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:05.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.431 --rc genhtml_branch_coverage=1 00:31:05.431 --rc genhtml_function_coverage=1 00:31:05.431 --rc genhtml_legend=1 00:31:05.431 --rc geninfo_all_blocks=1 00:31:05.431 --rc geninfo_unexecuted_blocks=1 00:31:05.431 00:31:05.431 ' 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:05.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.431 --rc genhtml_branch_coverage=1 00:31:05.431 --rc genhtml_function_coverage=1 00:31:05.431 --rc genhtml_legend=1 00:31:05.431 --rc geninfo_all_blocks=1 00:31:05.431 --rc geninfo_unexecuted_blocks=1 00:31:05.431 00:31:05.431 ' 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:05.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.431 --rc genhtml_branch_coverage=1 00:31:05.431 --rc genhtml_function_coverage=1 00:31:05.431 --rc genhtml_legend=1 00:31:05.431 --rc geninfo_all_blocks=1 00:31:05.431 --rc geninfo_unexecuted_blocks=1 00:31:05.431 00:31:05.431 ' 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:05.431 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:05.432 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
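nvmftestinit, traced below, wires up the virtual test network: two initiator-side veth pairs and two target-side veth pairs, with the target ends moved into the nvmf_tgt_ns_spdk namespace, all bridge-side ends enslaved to nvmf_br, plus iptables ACCEPT rules for port 4420. Condensed into a standalone sketch (interface names and addresses are taken from the trace itself; the initial "Cannot find device" teardown pass and the SPDK_NVMF comment tags on the iptables rules are omitted):

# Namespace and the four veth pairs.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# Target ends live inside the namespace.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addressing: initiators .1/.2 in the root namespace, targets .3/.4 inside.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the four peer ends together in the root namespace.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$port" up
    ip link set "$port" master nvmf_br
done
# Let NVMe/TCP traffic through.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT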
00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # nvmf_veth_init 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:05.432 Cannot find device "nvmf_init_br" 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:05.432 Cannot find device "nvmf_init_br2" 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:05.432 Cannot find device "nvmf_tgt_br" 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:05.432 Cannot find device "nvmf_tgt_br2" 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:05.432 Cannot find device "nvmf_init_br" 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:05.432 Cannot find device "nvmf_init_br2" 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:05.432 Cannot find device "nvmf_tgt_br" 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:05.432 Cannot find device "nvmf_tgt_br2" 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:05.432 Cannot find device "nvmf_br" 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:05.432 Cannot find device "nvmf_init_if" 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:31:05.432 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:05.432 Cannot find device "nvmf_init_if2" 00:31:05.433 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:31:05.433 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:05.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:05.433 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:31:05.433 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:05.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:05.433 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:31:05.433 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:05.433 
20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:05.433 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:05.433 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:05.433 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:05.433 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:05.433 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:05.433 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:05.691 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:05.691 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:31:05.691 00:31:05.691 --- 10.0.0.3 ping statistics --- 00:31:05.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.691 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:05.691 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:05.691 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:31:05.691 00:31:05.691 --- 10.0.0.4 ping statistics --- 00:31:05.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.691 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:05.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:05.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:31:05.691 00:31:05.691 --- 10.0.0.1 ping statistics --- 00:31:05.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.691 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:05.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:05.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:31:05.691 00:31:05.691 --- 10.0.0.2 ping statistics --- 00:31:05.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.691 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # return 0 00:31:05.691 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:05.692 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:05.692 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:05.692 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:05.692 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:05.692 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:05.692 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:05.692 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:05.692 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:05.692 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:05.692 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:05.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:05.692 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=70642 00:31:05.692 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:31:05.692 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 70642 00:31:05.692 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 70642 ']' 00:31:05.692 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:05.692 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:05.692 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:05.692 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:05.692 20:30:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:05.692 [2024-10-01 20:30:00.909270] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
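nvmf_tgt is launched above with core mask -m 0x78. As a quick sanity check, 0x78 is binary 1111000, that is, a mask selecting cores 3 through 6, which matches the "Total cores available: 4" line and the four reactor cores reported in the startup log that follows:

echo 'obase=2; ibase=16; 78' | bc    # prints 1111000: bits 3-6 set, so cores 3,4,5,6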
00:31:05.692 [2024-10-01 20:30:00.909366] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:05.949 [2024-10-01 20:30:01.050350] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:05.949 [2024-10-01 20:30:01.110047] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:05.949 [2024-10-01 20:30:01.110529] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:05.949 [2024-10-01 20:30:01.110924] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:05.949 [2024-10-01 20:30:01.111324] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:05.949 [2024-10-01 20:30:01.111544] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:05.949 [2024-10-01 20:30:01.111983] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:31:05.949 [2024-10-01 20:30:01.112218] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:31:05.949 [2024-10-01 20:30:01.112218] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:05.949 [2024-10-01 20:30:01.112141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:06.207 [2024-10-01 20:30:01.284310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:06.207 Malloc0 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 
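Condensed, the rpc_cmd sequence traced through here and just below provisions the whole target: create the TCP transport, back it with a 64 MiB malloc bdev (131072 blocks of 512 bytes, matching the "I/O targets" line later in the run), create the subsystem, attach the namespace, and open the listener. The same five calls can be issued directly against the target's RPC socket (repo paths shortened):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420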
00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:06.207 [2024-10-01 20:30:01.335801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:06.207 { 00:31:06.207 "params": { 00:31:06.207 "name": "Nvme$subsystem", 00:31:06.207 "trtype": "$TEST_TRANSPORT", 00:31:06.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:06.207 "adrfam": "ipv4", 00:31:06.207 "trsvcid": "$NVMF_PORT", 00:31:06.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:06.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:06.207 "hdgst": ${hdgst:-false}, 00:31:06.207 "ddgst": ${ddgst:-false} 00:31:06.207 }, 00:31:06.207 "method": "bdev_nvme_attach_controller" 00:31:06.207 } 00:31:06.207 EOF 00:31:06.207 )") 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:31:06.207 20:30:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:06.207 "params": { 00:31:06.207 "name": "Nvme1", 00:31:06.207 "trtype": "tcp", 00:31:06.207 "traddr": "10.0.0.3", 00:31:06.207 "adrfam": "ipv4", 00:31:06.207 "trsvcid": "4420", 00:31:06.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:06.207 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:06.207 "hdgst": false, 00:31:06.207 "ddgst": false 00:31:06.207 }, 00:31:06.207 "method": "bdev_nvme_attach_controller" 00:31:06.207 }' 00:31:06.207 [2024-10-01 20:30:01.400886] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
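gen_nvmf_target_json above resolves its heredoc template purely through shell parameter expansion: ${hdgst:-false} expands to the variable's value when it is set and to the literal false otherwise, which is why the unset digest options come out as false in the printed config. A minimal demonstration of the same expansion:

unset hdgst
echo "\"hdgst\": ${hdgst:-false}"   # -> "hdgst": false
hdgst=true
echo "\"hdgst\": ${hdgst:-false}"   # -> "hdgst": true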
00:31:06.207 [2024-10-01 20:30:01.400981] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70677 ] 00:31:06.467 [2024-10-01 20:30:01.572291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:06.467 [2024-10-01 20:30:01.665371] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.467 [2024-10-01 20:30:01.665470] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:06.467 [2024-10-01 20:30:01.665487] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.725 I/O targets: 00:31:06.725 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:06.725 00:31:06.725 00:31:06.725 CUnit - A unit testing framework for C - Version 2.1-3 00:31:06.725 http://cunit.sourceforge.net/ 00:31:06.725 00:31:06.725 00:31:06.725 Suite: bdevio tests on: Nvme1n1 00:31:06.725 Test: blockdev write read block ...passed 00:31:06.725 Test: blockdev write zeroes read block ...passed 00:31:06.725 Test: blockdev write zeroes read no split ...passed 00:31:06.725 Test: blockdev write zeroes read split ...passed 00:31:06.725 Test: blockdev write zeroes read split partial ...passed 00:31:06.725 Test: blockdev reset ...[2024-10-01 20:30:01.968338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:06.725 [2024-10-01 20:30:01.968868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210dd20 (9): Bad file descriptor 00:31:06.984 [2024-10-01 20:30:01.982348] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:06.984 passed 00:31:06.984 Test: blockdev write read 8 blocks ...passed 00:31:06.984 Test: blockdev write read size > 128k ...passed 00:31:06.984 Test: blockdev write read invalid size ...passed 00:31:06.984 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:06.984 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:06.984 Test: blockdev write read max offset ...passed 00:31:06.984 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:06.984 Test: blockdev writev readv 8 blocks ...passed 00:31:06.984 Test: blockdev writev readv 30 x 1block ...passed 00:31:06.984 Test: blockdev writev readv block ...passed 00:31:06.984 Test: blockdev writev readv size > 128k ...passed 00:31:06.984 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:06.984 Test: blockdev comparev and writev ...[2024-10-01 20:30:02.157234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:06.984 [2024-10-01 20:30:02.157282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:06.984 [2024-10-01 20:30:02.157304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:06.984 [2024-10-01 20:30:02.157315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:06.984 [2024-10-01 20:30:02.157803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:06.984 [2024-10-01 20:30:02.157832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:06.984 [2024-10-01 20:30:02.157850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:06.984 [2024-10-01 20:30:02.157861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:06.984 [2024-10-01 20:30:02.158336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:06.984 [2024-10-01 20:30:02.158365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:06.984 [2024-10-01 20:30:02.158383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:06.984 [2024-10-01 20:30:02.158393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:06.984 [2024-10-01 20:30:02.158823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:06.984 [2024-10-01 20:30:02.158855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:06.984 [2024-10-01 20:30:02.158873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:06.984 [2024-10-01 20:30:02.158884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:06.984 passed 00:31:07.242 Test: blockdev nvme passthru rw ...passed 00:31:07.243 Test: blockdev nvme passthru vendor specific ...[2024-10-01 20:30:02.242607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:07.243 [2024-10-01 20:30:02.242672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:07.243 [2024-10-01 20:30:02.242806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:07.243 [2024-10-01 20:30:02.242823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:07.243 [2024-10-01 20:30:02.242930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:07.243 [2024-10-01 20:30:02.242945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:07.243 passed 00:31:07.243 Test: blockdev nvme admin passthru ...[2024-10-01 20:30:02.243060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:07.243 [2024-10-01 20:30:02.243076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:07.243 passed 00:31:07.243 Test: blockdev copy ...passed 00:31:07.243 00:31:07.243 Run Summary: Type Total Ran Passed Failed Inactive 00:31:07.243 suites 1 1 n/a 0 0 00:31:07.243 tests 23 23 23 0 0 00:31:07.243 asserts 152 152 152 0 n/a 00:31:07.243 00:31:07.243 Elapsed time = 0.887 seconds 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:07.501 rmmod nvme_tcp 00:31:07.501 rmmod nvme_fabrics 00:31:07.501 rmmod nvme_keyring 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
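The COMPARE FAILURE and ABORTED - FAILED FUSED notices above come from the fused comparev-and-writev test and did not fail the suite (23 of 23 tests passed). With the run summary in, nvmftestfini tears the rig down: sync, unload the kernel initiator modules (retried, since nvme-tcp can stay busy while connections drain), kill the target process, and strip only the SPDK-tagged iptables rules. Roughly, as a sketch (the 20-iteration retry loop, the module order, and the SPDK_NVMF grep come straight from the trace; the sleep between retries is an assumption):

sync
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break    # fails while the transport is still in use
    sleep 1                             # assumed backoff; not visible in the trace
done
modprobe -v -r nvme-fabrics
set -e
kill "$nvmfpid" && wait "$nvmfpid"      # pid 70642 in this run; wait works because
                                        # the harness spawned nvmf_tgt itself
# iptr (traced below): restore everything except rules carrying the SPDK_NVMF comment
iptables-save | grep -v SPDK_NVMF | iptables-restore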
00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 70642 ']' 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 70642 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 70642 ']' 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 70642 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70642 00:31:07.501 killing process with pid 70642 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70642' 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 70642 00:31:07.501 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 70642 00:31:07.759 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:07.759 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:07.759 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:07.759 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:31:07.759 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:07.759 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:31:07.759 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:31:07.759 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:07.759 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:07.759 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:07.759 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:07.759 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:07.759 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:07.759 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:07.759 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:07.759 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:07.759 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:07.759 20:30:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:08.018 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:31:08.018 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:08.018 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:08.018 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:08.018 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:08.018 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.018 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.018 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.018 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:31:08.018 00:31:08.018 real 0m2.872s 00:31:08.018 user 0m9.160s 00:31:08.018 sys 0m0.833s 00:31:08.018 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:08.019 ************************************ 00:31:08.019 END TEST nvmf_bdevio 00:31:08.019 ************************************ 00:31:08.019 20:30:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:08.019 20:30:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:31:08.019 00:31:08.019 real 3m36.479s 00:31:08.019 user 11m37.553s 00:31:08.019 sys 1m1.355s 00:31:08.019 20:30:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:08.019 20:30:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:31:08.019 ************************************ 00:31:08.019 END TEST nvmf_target_core 00:31:08.019 ************************************ 00:31:08.019 20:30:03 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:31:08.019 20:30:03 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:08.019 20:30:03 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:08.019 20:30:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:08.019 ************************************ 00:31:08.019 START TEST nvmf_target_extra 00:31:08.019 ************************************ 00:31:08.019 20:30:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:31:08.278 * Looking for test storage... 
00:31:08.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:08.278 20:30:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:08.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.278 --rc genhtml_branch_coverage=1 00:31:08.278 --rc genhtml_function_coverage=1 00:31:08.278 --rc genhtml_legend=1 00:31:08.278 --rc geninfo_all_blocks=1 00:31:08.279 --rc geninfo_unexecuted_blocks=1 00:31:08.279 00:31:08.279 ' 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:08.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.279 --rc genhtml_branch_coverage=1 00:31:08.279 --rc genhtml_function_coverage=1 00:31:08.279 --rc genhtml_legend=1 00:31:08.279 --rc geninfo_all_blocks=1 00:31:08.279 --rc geninfo_unexecuted_blocks=1 00:31:08.279 00:31:08.279 ' 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:08.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.279 --rc genhtml_branch_coverage=1 00:31:08.279 --rc genhtml_function_coverage=1 00:31:08.279 --rc genhtml_legend=1 00:31:08.279 --rc geninfo_all_blocks=1 00:31:08.279 --rc geninfo_unexecuted_blocks=1 00:31:08.279 00:31:08.279 ' 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:08.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.279 --rc genhtml_branch_coverage=1 00:31:08.279 --rc genhtml_function_coverage=1 00:31:08.279 --rc genhtml_legend=1 00:31:08.279 --rc geninfo_all_blocks=1 00:31:08.279 --rc geninfo_unexecuted_blocks=1 00:31:08.279 00:31:08.279 ' 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.279 20:30:03 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:08.279 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:08.279 ************************************ 00:31:08.279 START TEST nvmf_example 00:31:08.279 ************************************ 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:31:08.279 * Looking for test storage... 
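Each suite preamble in this log (above for nvmf_target_extra, again below for nvmf_example) traces the same lcov version gate: the lt helper from scripts/common.sh splits both version strings on '.', '-' or ':' and compares them field by field, so "lt 1.15 2" returns 0 and the older-lcov coverage options get set. A minimal standalone sketch of that comparison, simplified from the traced cmp_versions (the real helper also validates each field with decimal and supports other operators):

# Sketch: return 0 (shell true) when version $1 sorts before version $2.
lt() {
  local -a ver1 ver2
  local v
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater: not less-than
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller: less-than
  done
  return 1                                            # equal: not less-than
}
lt 1.15 2 && echo '1.15 < 2'   # matches the traced outcome (return 0)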
00:31:08.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:31:08.279 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:08.610 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:08.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.610 --rc genhtml_branch_coverage=1 00:31:08.610 --rc genhtml_function_coverage=1 00:31:08.610 --rc genhtml_legend=1 00:31:08.610 --rc geninfo_all_blocks=1 00:31:08.611 --rc geninfo_unexecuted_blocks=1 00:31:08.611 00:31:08.611 ' 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:08.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.611 --rc genhtml_branch_coverage=1 00:31:08.611 --rc genhtml_function_coverage=1 00:31:08.611 --rc genhtml_legend=1 00:31:08.611 --rc geninfo_all_blocks=1 00:31:08.611 --rc geninfo_unexecuted_blocks=1 00:31:08.611 00:31:08.611 ' 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:08.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.611 --rc genhtml_branch_coverage=1 00:31:08.611 --rc genhtml_function_coverage=1 00:31:08.611 --rc genhtml_legend=1 00:31:08.611 --rc geninfo_all_blocks=1 00:31:08.611 --rc geninfo_unexecuted_blocks=1 00:31:08.611 00:31:08.611 ' 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:08.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.611 --rc genhtml_branch_coverage=1 00:31:08.611 --rc genhtml_function_coverage=1 00:31:08.611 --rc genhtml_legend=1 00:31:08.611 --rc geninfo_all_blocks=1 00:31:08.611 --rc geninfo_unexecuted_blocks=1 00:31:08.611 00:31:08.611 ' 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:31:08.611 20:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:08.611 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:08.611 20:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:31:08.611 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # nvmf_veth_init 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:08.612 Cannot find device "nvmf_init_br" 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:08.612 Cannot find device "nvmf_init_br2" 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:08.612 Cannot find device "nvmf_tgt_br" 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:08.612 Cannot find device "nvmf_tgt_br2" 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:08.612 Cannot find device "nvmf_init_br" 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:08.612 Cannot find device "nvmf_init_br2" 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:08.612 Cannot find device "nvmf_tgt_br" 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:08.612 Cannot find device "nvmf_tgt_br2" 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:08.612 Cannot find device "nvmf_br" 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:08.612 Cannot find 
device "nvmf_init_if" 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:08.612 Cannot find device "nvmf_init_if2" 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:08.612 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:08.612 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:08.612 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:08.876 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:08.876 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:08.876 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:08.876 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:08.876 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:08.876 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:08.876 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:08.876 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:08.876 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:08.876 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:08.876 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:08.876 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:08.876 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:08.876 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:08.876 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:08.876 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:08.876 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:08.876 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:08.876 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:08.876 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:08.876 20:30:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:08.876 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:08.876 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:31:08.876 00:31:08.876 --- 10.0.0.3 ping statistics --- 00:31:08.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.876 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:08.876 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:08.876 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:31:08.876 00:31:08.876 --- 10.0.0.4 ping statistics --- 00:31:08.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.876 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:08.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:08.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:31:08.876 00:31:08.876 --- 10.0.0.1 ping statistics --- 00:31:08.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.876 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:08.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:08.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:31:08.876 00:31:08.876 --- 10.0.0.2 ping statistics --- 00:31:08.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.876 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # return 0 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=70975 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 70975 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 70975 ']' 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:08.876 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:08.876 20:30:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:31:10.251 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:10.251 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:31:10.251 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:31:10.251 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:10.251 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:31:10.251 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:10.251 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.251 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:31:10.251 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.251 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:31:10.251 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.251 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:31:10.251 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.251 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:31:10.251 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:10.251 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.251 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:31:10.251 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.251 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:31:10.251 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:10.252 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.252 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:31:10.252 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.252 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:10.252 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.252 20:30:05 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:31:10.252 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.252 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:31:10.252 20:30:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:22.455 Initializing NVMe Controllers 00:31:22.455 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:31:22.455 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:22.455 Initialization complete. Launching workers. 00:31:22.455 ======================================================== 00:31:22.455 Latency(us) 00:31:22.455 Device Information : IOPS MiB/s Average min max 00:31:22.455 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14589.13 56.99 4386.68 798.05 22223.55 00:31:22.455 ======================================================== 00:31:22.455 Total : 14589.13 56.99 4386.68 798.05 22223.55 00:31:22.455 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:22.455 rmmod nvme_tcp 00:31:22.455 rmmod nvme_fabrics 00:31:22.455 rmmod nvme_keyring 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 70975 ']' 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 70975 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 70975 ']' 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 70975 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70975 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:31:22.455 20:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:31:22.455 killing process with pid 70975 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70975' 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 70975 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 70975 00:31:22.455 nvmf threads initialize successfully 00:31:22.455 bdev subsystem init successfully 00:31:22.455 created a nvmf target service 00:31:22.455 create targets's poll groups done 00:31:22.455 all subsystems of target started 00:31:22.455 nvmf target is running 00:31:22.455 all subsystems of target stopped 00:31:22.455 destroy targets's poll groups done 00:31:22.455 destroyed the nvmf target service 00:31:22.455 bdev subsystem finish successfully 00:31:22.455 nvmf threads destroy successfully 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:22.455 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:22.456 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:22.456 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:22.456 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:22.456 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:22.456 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:22.456 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:22.456 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:22.456 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:22.456 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:22.456 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:22.456 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:22.456 20:30:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip 
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:31:22.456 ************************************
00:31:22.456 END TEST nvmf_example
00:31:22.456 ************************************
00:31:22.456
00:31:22.456 real 0m12.648s
00:31:22.456 user 0m44.667s
00:31:22.456 sys 0m2.019s
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:31:22.456 ************************************
00:31:22.456 START TEST nvmf_filesystem
00:31:22.456 ************************************
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp
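The START/END banner pairs and the real/user/sys block come from autotest_common.sh's run_test helper, which wraps each test script in a timed block. Roughly, as a sketch of the observable behavior rather than the actual implementation:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time {
            "$@"    # the wrapped script, e.g. filesystem.sh --transport=tcp
            echo "************************************"
            echo "END TEST $name"
            echo "************************************"
        }
    }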
00:31:22.456 * Looking for test storage...
00:31:22.456 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:31:22.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:22.456 --rc genhtml_branch_coverage=1
00:31:22.456 --rc genhtml_function_coverage=1
00:31:22.456 --rc genhtml_legend=1
00:31:22.456 --rc geninfo_all_blocks=1
00:31:22.456 --rc geninfo_unexecuted_blocks=1
00:31:22.456
00:31:22.456 '
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:31:22.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:22.456 --rc genhtml_branch_coverage=1
00:31:22.456 --rc genhtml_function_coverage=1
00:31:22.456 --rc genhtml_legend=1
00:31:22.456 --rc geninfo_all_blocks=1
00:31:22.456 --rc geninfo_unexecuted_blocks=1
00:31:22.456
00:31:22.456 '
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:31:22.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:22.456 --rc genhtml_branch_coverage=1
00:31:22.456 --rc genhtml_function_coverage=1
00:31:22.456 --rc genhtml_legend=1
00:31:22.456 --rc geninfo_all_blocks=1
00:31:22.456 --rc geninfo_unexecuted_blocks=1
00:31:22.456
00:31:22.456 '
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:31:22.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:22.456 --rc genhtml_branch_coverage=1
00:31:22.456 --rc genhtml_function_coverage=1
00:31:22.456 --rc genhtml_legend=1
00:31:22.456 --rc geninfo_all_blocks=1
00:31:22.456 --rc geninfo_unexecuted_blocks=1
00:31:22.456
00:31:22.456 '
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
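The lt 1.15 2 trace above is scripts/common.sh comparing dotted versions field by field: IFS=.-: splits on dots, dashes and colons, missing fields count as 0, and 1.15 is older than 2 because 1 < 2 in the first field. The same idea as a self-contained sketch, simplified to dot-separated numeric fields:

    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0   # strictly older
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1   # strictly newer
        done
        return 1                                              # equal
    }
    version_lt 1.15 2 && echo "1.15 < 2"   # prints, matching the trace above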
20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n
00:31:22.456 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:31:22.457 20:30:16
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # 
CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:31:22.457 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:31:22.457 #define SPDK_CONFIG_H 00:31:22.457 #define SPDK_CONFIG_AIO_FSDEV 1 00:31:22.457 #define SPDK_CONFIG_APPS 1 00:31:22.457 #define SPDK_CONFIG_ARCH native 00:31:22.457 #undef SPDK_CONFIG_ASAN 00:31:22.457 #define SPDK_CONFIG_AVAHI 1 
00:31:22.457 #undef SPDK_CONFIG_CET 00:31:22.457 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:31:22.457 #define SPDK_CONFIG_COVERAGE 1 00:31:22.457 #define SPDK_CONFIG_CROSS_PREFIX 00:31:22.457 #undef SPDK_CONFIG_CRYPTO 00:31:22.457 #undef SPDK_CONFIG_CRYPTO_MLX5 00:31:22.457 #undef SPDK_CONFIG_CUSTOMOCF 00:31:22.457 #undef SPDK_CONFIG_DAOS 00:31:22.457 #define SPDK_CONFIG_DAOS_DIR 00:31:22.457 #define SPDK_CONFIG_DEBUG 1 00:31:22.457 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:31:22.457 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:31:22.457 #define SPDK_CONFIG_DPDK_INC_DIR 00:31:22.457 #define SPDK_CONFIG_DPDK_LIB_DIR 00:31:22.457 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:31:22.457 #undef SPDK_CONFIG_DPDK_UADK 00:31:22.457 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:31:22.457 #define SPDK_CONFIG_EXAMPLES 1 00:31:22.457 #undef SPDK_CONFIG_FC 00:31:22.457 #define SPDK_CONFIG_FC_PATH 00:31:22.457 #define SPDK_CONFIG_FIO_PLUGIN 1 00:31:22.457 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:31:22.457 #define SPDK_CONFIG_FSDEV 1 00:31:22.457 #undef SPDK_CONFIG_FUSE 00:31:22.457 #undef SPDK_CONFIG_FUZZER 00:31:22.457 #define SPDK_CONFIG_FUZZER_LIB 00:31:22.457 #define SPDK_CONFIG_GOLANG 1 00:31:22.457 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:31:22.457 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:31:22.457 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:31:22.457 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:31:22.457 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:31:22.458 #undef SPDK_CONFIG_HAVE_LIBBSD 00:31:22.458 #undef SPDK_CONFIG_HAVE_LZ4 00:31:22.458 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:31:22.458 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:31:22.458 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:31:22.458 #define SPDK_CONFIG_IDXD 1 00:31:22.458 #define SPDK_CONFIG_IDXD_KERNEL 1 00:31:22.458 #undef SPDK_CONFIG_IPSEC_MB 00:31:22.458 #define SPDK_CONFIG_IPSEC_MB_DIR 00:31:22.458 #define SPDK_CONFIG_ISAL 1 00:31:22.458 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:31:22.458 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:31:22.458 #define SPDK_CONFIG_LIBDIR 00:31:22.458 #undef SPDK_CONFIG_LTO 00:31:22.458 #define SPDK_CONFIG_MAX_LCORES 128 00:31:22.458 #define SPDK_CONFIG_NVME_CUSE 1 00:31:22.458 #undef SPDK_CONFIG_OCF 00:31:22.458 #define SPDK_CONFIG_OCF_PATH 00:31:22.458 #define SPDK_CONFIG_OPENSSL_PATH 00:31:22.458 #undef SPDK_CONFIG_PGO_CAPTURE 00:31:22.458 #define SPDK_CONFIG_PGO_DIR 00:31:22.458 #undef SPDK_CONFIG_PGO_USE 00:31:22.458 #define SPDK_CONFIG_PREFIX /usr/local 00:31:22.458 #undef SPDK_CONFIG_RAID5F 00:31:22.458 #undef SPDK_CONFIG_RBD 00:31:22.458 #define SPDK_CONFIG_RDMA 1 00:31:22.458 #define SPDK_CONFIG_RDMA_PROV verbs 00:31:22.458 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:31:22.458 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:31:22.458 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:31:22.458 #define SPDK_CONFIG_SHARED 1 00:31:22.458 #undef SPDK_CONFIG_SMA 00:31:22.458 #define SPDK_CONFIG_TESTS 1 00:31:22.458 #undef SPDK_CONFIG_TSAN 00:31:22.458 #define SPDK_CONFIG_UBLK 1 00:31:22.458 #define SPDK_CONFIG_UBSAN 1 00:31:22.458 #undef SPDK_CONFIG_UNIT_TESTS 00:31:22.458 #undef SPDK_CONFIG_URING 00:31:22.458 #define SPDK_CONFIG_URING_PATH 00:31:22.458 #undef SPDK_CONFIG_URING_ZNS 00:31:22.458 #define SPDK_CONFIG_USDT 1 00:31:22.458 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:31:22.458 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:31:22.458 #undef SPDK_CONFIG_VFIO_USER 00:31:22.458 #define SPDK_CONFIG_VFIO_USER_DIR 00:31:22.458 #define SPDK_CONFIG_VHOST 1 00:31:22.458 
#define SPDK_CONFIG_VIRTIO 1 00:31:22.458 #undef SPDK_CONFIG_VTUNE 00:31:22.458 #define SPDK_CONFIG_VTUNE_DIR 00:31:22.458 #define SPDK_CONFIG_WERROR 1 00:31:22.458 #define SPDK_CONFIG_WPDK_DIR 00:31:22.458 #undef SPDK_CONFIG_XNVME 00:31:22.458 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:31:22.458 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:31:22.459 
20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:31:22.459 20:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:31:22.459 20:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:31:22.459 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:31:22.460 
20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV=
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]]
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]]
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]=
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt=
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']'
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind=
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind=
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']'
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j10
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=()
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE=
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@"
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp
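filesystem.sh was invoked with --transport=tcp, and the for/case loop just traced is where that option becomes TEST_TRANSPORT for everything downstream. In isolation the visible pattern is (a sketch; the real loop in autotest_common.sh handles more options the same way):

    TEST_TRANSPORT=
    for i in "$@"; do
        case "$i" in
            --transport=*)
                TEST_TRANSPORT=${i#--transport=}   # tcp in this run
                ;;
        esac
    done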
00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 71250 ]] 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 71250 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.wj4xay 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.wj4xay/tests/target /tmp/spdk.wj4xay 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda5 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=btrfs 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=13986353152 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=20314062848 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5582254080 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 
-- # read -r source fs size use avail _ mount 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=devtmpfs 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:31:22.460 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4194304 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=4194304 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6256394240 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6266425344 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=2486431744 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=2506571776 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=20140032 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda5 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=btrfs 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=13986353152 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=20314062848 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5582254080 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6266241024 
00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6266425344 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=184320 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda2 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext4 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=840085504 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=1012768768 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=103477248 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda3 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=vfat 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=91617280 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=104607744 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12990464 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=1253269504 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=1253281792 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_3/fedora39-libvirt/output 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=fuse.sshfs 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=96505925632 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=105088212992 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=3196854272 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:31:22.461 * Looking for test storage... 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/home 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=13986353152 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ btrfs == tmpfs ]] 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ btrfs == ramfs ]] 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ /home == / ]] 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:22.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:31:22.461 20:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:31:22.461 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:22.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.462 --rc genhtml_branch_coverage=1 00:31:22.462 --rc genhtml_function_coverage=1 00:31:22.462 --rc genhtml_legend=1 00:31:22.462 --rc geninfo_all_blocks=1 00:31:22.462 --rc geninfo_unexecuted_blocks=1 00:31:22.462 00:31:22.462 ' 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:22.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.462 --rc genhtml_branch_coverage=1 00:31:22.462 --rc genhtml_function_coverage=1 00:31:22.462 --rc genhtml_legend=1 00:31:22.462 --rc geninfo_all_blocks=1 00:31:22.462 --rc geninfo_unexecuted_blocks=1 00:31:22.462 00:31:22.462 ' 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:22.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.462 --rc genhtml_branch_coverage=1 00:31:22.462 --rc genhtml_function_coverage=1 00:31:22.462 --rc genhtml_legend=1 00:31:22.462 --rc geninfo_all_blocks=1 00:31:22.462 --rc geninfo_unexecuted_blocks=1 00:31:22.462 00:31:22.462 ' 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:22.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.462 --rc genhtml_branch_coverage=1 00:31:22.462 --rc genhtml_function_coverage=1 00:31:22.462 --rc genhtml_legend=1 00:31:22.462 --rc geninfo_all_blocks=1 00:31:22.462 --rc geninfo_unexecuted_blocks=1 00:31:22.462 00:31:22.462 ' 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- 
# uname -s 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:22.462 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:22.462 20:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # nvmf_veth_init 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:22.462 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 
-- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:22.463 Cannot find device "nvmf_init_br" 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:22.463 Cannot find device "nvmf_init_br2" 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:22.463 Cannot find device "nvmf_tgt_br" 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:22.463 Cannot find device "nvmf_tgt_br2" 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:22.463 Cannot find device "nvmf_init_br" 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:22.463 Cannot find device "nvmf_init_br2" 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:22.463 Cannot find device "nvmf_tgt_br" 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:22.463 Cannot find device "nvmf_tgt_br2" 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:22.463 Cannot find device "nvmf_br" 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:22.463 Cannot find device "nvmf_init_if" 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:22.463 Cannot find device "nvmf_init_if2" 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:22.463 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:22.463 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:22.463 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:22.463 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:31:22.463 00:31:22.463 --- 10.0.0.3 ping statistics --- 00:31:22.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.463 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:22.463 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:22.463 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:31:22.463 00:31:22.463 --- 10.0.0.4 ping statistics --- 00:31:22.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.463 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:22.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:22.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:31:22.463 00:31:22.463 --- 10.0.0.1 ping statistics --- 00:31:22.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.463 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:22.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:22.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:31:22.463 00:31:22.463 --- 10.0.0.2 ping statistics --- 00:31:22.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.463 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # return 0 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:22.463 20:30:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:31:22.463 ************************************ 00:31:22.463 START TEST nvmf_filesystem_no_in_capsule 00:31:22.463 ************************************ 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=71450 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:22.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
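For reference, the veth topology that nvmf_veth_init assembles and ping-verifies above condenses to roughly the commands below. This is a sketch from the logged ip/iptables calls, with the link bring-ups folded into loops and the SPDK_NVMF iptables comment tags omitted:

# Target-side interfaces live in a network namespace; initiator side stays on the host.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator IPs
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target IPs
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the host-side peers so initiator and target namespaces can reach each other.
ip link add nvmf_br type bridge && ip link set nvmf_br up
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" up && ip link set "$l" master nvmf_br
done
# Open the NVMe/TCP port and allow bridged forwarding, then sanity-check with pings.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                   # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host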
00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 71450 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 71450 ']' 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:22.464 [2024-10-01 20:30:17.063411] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:31:22.464 [2024-10-01 20:30:17.063504] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:22.464 [2024-10-01 20:30:17.200465] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:22.464 [2024-10-01 20:30:17.261970] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:22.464 [2024-10-01 20:30:17.262030] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:22.464 [2024-10-01 20:30:17.262043] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:22.464 [2024-10-01 20:30:17.262051] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:22.464 [2024-10-01 20:30:17.262059] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
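For reference, the waitforlisten call above (pid 71450, rpc_addr /var/tmp/spdk.sock, max_retries=100) amounts to a poll loop of roughly this shape. This is a sketch, assuming the usual scripts/rpc.py probe the SPDK harness uses, not a verbatim copy of the helper:

pid=71450
rpc_addr=/var/tmp/spdk.sock                 # matches DEFAULT_RPC_ADDR exported earlier
for ((i = 0; i < 100; i++)); do             # max_retries=100, as in the trace
    kill -0 "$pid" 2> /dev/null || exit 1   # give up if the target died while waiting
    # Consider the target up once its RPC socket answers a trivial method call.
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done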
00:31:22.464 [2024-10-01 20:30:17.262150] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.464 [2024-10-01 20:30:17.262273] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:22.464 [2024-10-01 20:30:17.263170] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:22.464 [2024-10-01 20:30:17.263182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:22.464 [2024-10-01 20:30:17.481339] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:22.464 Malloc1 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.464 20:30:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:22.464 [2024-10-01 20:30:17.596719] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.464 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:22.464 { 00:31:22.464 "aliases": [ 00:31:22.464 "032a4ba9-5a01-4406-a346-b7637082dcf9" 00:31:22.464 ], 00:31:22.464 "assigned_rate_limits": { 00:31:22.464 "r_mbytes_per_sec": 0, 00:31:22.464 "rw_ios_per_sec": 0, 00:31:22.464 "rw_mbytes_per_sec": 0, 00:31:22.464 "w_mbytes_per_sec": 0 00:31:22.464 }, 00:31:22.464 "block_size": 512, 00:31:22.464 "claim_type": "exclusive_write", 00:31:22.464 "claimed": true, 00:31:22.464 "driver_specific": {}, 00:31:22.464 "memory_domains": [ 00:31:22.464 { 00:31:22.464 "dma_device_id": "system", 00:31:22.464 "dma_device_type": 1 00:31:22.464 }, 00:31:22.464 { 00:31:22.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:22.464 
"dma_device_type": 2 00:31:22.464 } 00:31:22.464 ], 00:31:22.464 "name": "Malloc1", 00:31:22.464 "num_blocks": 1048576, 00:31:22.464 "product_name": "Malloc disk", 00:31:22.464 "supported_io_types": { 00:31:22.464 "abort": true, 00:31:22.464 "compare": false, 00:31:22.464 "compare_and_write": false, 00:31:22.464 "copy": true, 00:31:22.464 "flush": true, 00:31:22.464 "get_zone_info": false, 00:31:22.464 "nvme_admin": false, 00:31:22.464 "nvme_io": false, 00:31:22.464 "nvme_io_md": false, 00:31:22.464 "nvme_iov_md": false, 00:31:22.464 "read": true, 00:31:22.464 "reset": true, 00:31:22.464 "seek_data": false, 00:31:22.464 "seek_hole": false, 00:31:22.464 "unmap": true, 00:31:22.464 "write": true, 00:31:22.464 "write_zeroes": true, 00:31:22.464 "zcopy": true, 00:31:22.464 "zone_append": false, 00:31:22.464 "zone_management": false 00:31:22.464 }, 00:31:22.464 "uuid": "032a4ba9-5a01-4406-a346-b7637082dcf9", 00:31:22.465 "zoned": false 00:31:22.465 } 00:31:22.465 ]' 00:31:22.465 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:22.465 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:31:22.465 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:22.723 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:31:22.723 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:31:22.723 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:31:22.723 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:31:22.723 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:31:22.723 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:31:22.723 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:31:22.723 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:31:22.723 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:31:22.723 20:30:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:31:25.251 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:25.251 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:25.251 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:31:25.251 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:31:25.252 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:31:25.252 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:31:25.252 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:31:25.252 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:31:25.252 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:31:25.252 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:31:25.252 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:31:25.252 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:25.252 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:31:25.252 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:31:25.252 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:31:25.252 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:31:25.252 20:30:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:31:25.252 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:31:25.252 20:30:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:31:26.229 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:31:26.229 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:31:26.229 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:26.229 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:26.229 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:26.229 ************************************ 00:31:26.229 START TEST filesystem_ext4 00:31:26.229 ************************************ 00:31:26.229 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
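Condensed, the connect-and-prepare sequence traced above is equivalent to the following standalone sketch. The addresses, NQNs, serial string, and 512 MiB size are taken from the trace itself; the retry bound and the sysfs-based size check are assumptions about how the waitforserial and sec_size_to_bytes helpers behave internally:

    # Attach the initiator to the subsystem exported at 10.0.0.3:4420 (values from the trace).
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 \
        --hostid=e1b29161-2f6e-4e88-9903-c5274a848703

    # Poll until a namespace with the expected serial shows up (retry bound is an assumption).
    for _ in $(seq 1 15); do
        nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)') && break
        sleep 2
    done

    # /sys/block/<dev>/size counts 512-byte sectors; the namespace must match the 512 MiB malloc bdev.
    nvme_size=$(( $(cat "/sys/block/$nvme_name/size") * 512 ))
    (( nvme_size == 536870912 )) || exit 1

    # One GPT partition spanning the whole namespace, then reread the partition table.
    mkdir -p /mnt/device
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe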
00:31:26.229 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:31:26.229 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:31:26.229 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:31:26.229 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:31:26.229 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:31:26.229 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:31:26.229 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:31:26.229 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:31:26.229 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:31:26.229 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:31:26.229 mke2fs 1.47.0 (5-Feb-2023) 00:31:26.229 Discarding device blocks: 0/522240 done 00:31:26.229 Creating filesystem with 522240 1k blocks and 130560 inodes 00:31:26.229 Filesystem UUID: fdda6382-ef12-408f-99f0-c508b3f17aaa 00:31:26.229 Superblock backups stored on blocks: 00:31:26.229 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:31:26.229 00:31:26.229 Allocating group tables: 0/64 done 00:31:26.229 Writing inode tables: 0/64 done 00:31:26.229 Creating journal (8192 blocks): done 00:31:26.229 Writing superblocks and filesystem accounting information: 0/64 done 00:31:26.229 00:31:26.229 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:31:26.229 20:30:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:31:31.493 
20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 71450 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:31:31.493 ************************************ 00:31:31.493 END TEST filesystem_ext4 00:31:31.493 ************************************ 00:31:31.493 00:31:31.493 real 0m5.566s 00:31:31.493 user 0m0.022s 00:31:31.493 sys 0m0.057s 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:31.493 ************************************ 00:31:31.493 START TEST filesystem_btrfs 00:31:31.493 ************************************ 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:31:31.493 20:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:31:31.493 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:31:31.752 btrfs-progs v6.8.1 00:31:31.752 See https://btrfs.readthedocs.io for more information. 00:31:31.752 00:31:31.752 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:31:31.752 NOTE: several default settings have changed in version 5.15, please make sure 00:31:31.752 this does not affect your deployments: 00:31:31.752 - DUP for metadata (-m dup) 00:31:31.752 - enabled no-holes (-O no-holes) 00:31:31.752 - enabled free-space-tree (-R free-space-tree) 00:31:31.752 00:31:31.752 Label: (null) 00:31:31.752 UUID: d3d50cd2-63bf-411a-8662-7c6af49c3307 00:31:31.752 Node size: 16384 00:31:31.752 Sector size: 4096 (CPU page size: 4096) 00:31:31.752 Filesystem size: 510.00MiB 00:31:31.752 Block group profiles: 00:31:31.752 Data: single 8.00MiB 00:31:31.752 Metadata: DUP 32.00MiB 00:31:31.752 System: DUP 8.00MiB 00:31:31.752 SSD detected: yes 00:31:31.752 Zoned device: no 00:31:31.752 Features: extref, skinny-metadata, no-holes, free-space-tree 00:31:31.752 Checksum: crc32c 00:31:31.752 Number of devices: 1 00:31:31.752 Devices: 00:31:31.752 ID SIZE PATH 00:31:31.752 1 510.00MiB /dev/nvme0n1p1 00:31:31.752 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 71450 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:31:31.752 
20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:31:31.752 ************************************ 00:31:31.752 END TEST filesystem_btrfs 00:31:31.752 ************************************ 00:31:31.752 00:31:31.752 real 0m0.189s 00:31:31.752 user 0m0.026s 00:31:31.752 sys 0m0.053s 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:31.752 ************************************ 00:31:31.752 START TEST filesystem_xfs 00:31:31.752 ************************************ 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:31:31.752 20:30:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:31:31.752 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:31:31.752 = sectsz=512 attr=2, projid32bit=1 00:31:31.752 = crc=1 finobt=1, sparse=1, rmapbt=0 00:31:31.752 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:31:31.752 data 
= bsize=4096 blocks=130560, imaxpct=25 00:31:31.752 = sunit=0 swidth=0 blks 00:31:31.752 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:31:31.753 log =internal log bsize=4096 blocks=16384, version=2 00:31:31.753 = sectsz=512 sunit=0 blks, lazy-count=1 00:31:31.753 realtime =none extsz=4096 blocks=0, rtextents=0 00:31:32.686 Discarding blocks...Done. 00:31:32.686 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:31:32.686 20:30:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 71450 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:31:35.211 ************************************ 00:31:35.211 END TEST filesystem_xfs 00:31:35.211 ************************************ 00:31:35.211 00:31:35.211 real 0m3.134s 00:31:35.211 user 0m0.019s 00:31:35.211 sys 0m0.057s 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:35.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:35.211 20:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 71450 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 71450 ']' 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 71450 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71450 00:31:35.211 killing process with pid 71450 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71450' 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 71450 00:31:35.211 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 71450 00:31:35.469 ************************************ 00:31:35.470 END TEST nvmf_filesystem_no_in_capsule 00:31:35.470 ************************************ 00:31:35.470 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:31:35.470 00:31:35.470 real 0m13.583s 00:31:35.470 user 0m51.756s 00:31:35.470 sys 0m1.944s 00:31:35.470 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:35.470 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:35.470 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:31:35.470 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:35.470 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:35.470 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:31:35.470 ************************************ 00:31:35.470 START TEST nvmf_filesystem_in_capsule 00:31:35.470 ************************************ 00:31:35.470 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:31:35.470 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:31:35.470 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:31:35.470 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:35.470 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:35.470 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:35.470 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=71797 00:31:35.470 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 71797 00:31:35.470 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:35.470 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 71797 ']' 00:31:35.470 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:35.470 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:35.470 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
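The in-capsule variant that starts here differs from the previous pass mainly in how the transport is created: -c 4096 lets up to 4 KiB of write data travel inside the NVMe/TCP command capsule instead of being fetched in a separate data transfer, which is what the test name refers to. Stripped of tracing, the target bring-up performed over the next few steps is roughly the sequence below; every argument appears verbatim in the trace, and rpc_cmd is the harness wrapper that forwards to SPDK's rpc.py:

    # TCP transport with a 4 KiB in-capsule data size and an 8 KiB I/O unit size.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096

    # 512 MiB RAM-backed bdev: 1048576 blocks of 512 bytes, as seen in the bdev JSON.
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1

    # Subsystem with a fixed serial, namespace attached, listener on 10.0.0.3:4420.
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420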
00:31:35.470 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:35.470 20:30:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:35.470 [2024-10-01 20:30:30.706866] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:31:35.470 [2024-10-01 20:30:30.706970] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.729 [2024-10-01 20:30:30.849673] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:35.729 [2024-10-01 20:30:30.911135] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:35.729 [2024-10-01 20:30:30.911186] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.729 [2024-10-01 20:30:30.911198] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:35.729 [2024-10-01 20:30:30.911206] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:35.729 [2024-10-01 20:30:30.911213] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:35.729 [2024-10-01 20:30:30.911274] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.729 [2024-10-01 20:30:30.911362] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:35.729 [2024-10-01 20:30:30.911497] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:35.729 [2024-10-01 20:30:30.911501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.662 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:36.662 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:31:36.662 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:36.662 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:36.662 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:36.662 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:36.662 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:31:36.662 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:31:36.662 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.662 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:36.662 [2024-10-01 20:30:31.747258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:36.662 20:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.662 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:31:36.662 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.662 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:36.662 Malloc1 00:31:36.662 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.662 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:36.663 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.663 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:36.663 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.663 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:36.663 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.663 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:36.663 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.663 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:36.663 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.663 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:36.663 [2024-10-01 20:30:31.862507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:36.663 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.663 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:31:36.663 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:31:36.663 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:36.663 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:31:36.663 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:31:36.663 20:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:31:36.663 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.663 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:36.663 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.663 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:36.663 { 00:31:36.663 "aliases": [ 00:31:36.663 "d3f9b228-0ffd-4dd4-851b-dca5fb634d88" 00:31:36.663 ], 00:31:36.663 "assigned_rate_limits": { 00:31:36.663 "r_mbytes_per_sec": 0, 00:31:36.663 "rw_ios_per_sec": 0, 00:31:36.663 "rw_mbytes_per_sec": 0, 00:31:36.663 "w_mbytes_per_sec": 0 00:31:36.663 }, 00:31:36.663 "block_size": 512, 00:31:36.663 "claim_type": "exclusive_write", 00:31:36.663 "claimed": true, 00:31:36.663 "driver_specific": {}, 00:31:36.663 "memory_domains": [ 00:31:36.663 { 00:31:36.663 "dma_device_id": "system", 00:31:36.663 "dma_device_type": 1 00:31:36.663 }, 00:31:36.663 { 00:31:36.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:36.663 "dma_device_type": 2 00:31:36.663 } 00:31:36.663 ], 00:31:36.663 "name": "Malloc1", 00:31:36.663 "num_blocks": 1048576, 00:31:36.663 "product_name": "Malloc disk", 00:31:36.663 "supported_io_types": { 00:31:36.663 "abort": true, 00:31:36.663 "compare": false, 00:31:36.663 "compare_and_write": false, 00:31:36.663 "copy": true, 00:31:36.663 "flush": true, 00:31:36.663 "get_zone_info": false, 00:31:36.663 "nvme_admin": false, 00:31:36.663 "nvme_io": false, 00:31:36.663 "nvme_io_md": false, 00:31:36.663 "nvme_iov_md": false, 00:31:36.663 "read": true, 00:31:36.663 "reset": true, 00:31:36.663 "seek_data": false, 00:31:36.663 "seek_hole": false, 00:31:36.663 "unmap": true, 00:31:36.663 "write": true, 00:31:36.663 "write_zeroes": true, 00:31:36.663 "zcopy": true, 00:31:36.663 "zone_append": false, 00:31:36.663 "zone_management": false 00:31:36.663 }, 00:31:36.663 "uuid": "d3f9b228-0ffd-4dd4-851b-dca5fb634d88", 00:31:36.663 "zoned": false 00:31:36.663 } 00:31:36.663 ]' 00:31:36.663 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:36.921 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:31:36.921 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:36.921 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:31:36.921 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:31:36.921 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:31:36.921 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:31:36.921 20:30:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:31:36.921 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:31:36.921 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:31:36.921 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:31:36.921 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:31:36.921 20:30:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:31:39.456 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:39.456 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:39.456 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:31:39.456 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:31:39.456 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:31:39.456 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:31:39.456 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:31:39.456 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:31:39.456 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:31:39.456 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:31:39.456 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:31:39.456 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:39.456 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:31:39.456 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:31:39.456 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:31:39.456 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:31:39.456 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:31:39.456 20:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:31:39.456 20:30:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:31:40.389 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:31:40.389 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:31:40.389 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:40.389 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:40.389 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:40.389 ************************************ 00:31:40.389 START TEST filesystem_in_capsule_ext4 00:31:40.389 ************************************ 00:31:40.389 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:31:40.389 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:31:40.389 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:31:40.389 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:31:40.389 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:31:40.389 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:31:40.389 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:31:40.389 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:31:40.389 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:31:40.389 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:31:40.389 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:31:40.389 mke2fs 1.47.0 (5-Feb-2023) 00:31:40.390 Discarding device blocks: 0/522240 done 00:31:40.390 Creating filesystem with 522240 1k blocks and 130560 inodes 00:31:40.390 Filesystem UUID: 7a6e6380-65a0-49fa-b60c-2256232888bd 00:31:40.390 Superblock backups stored on blocks: 00:31:40.390 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:31:40.390 00:31:40.390 Allocating group tables: 0/64 done 00:31:40.390 Writing inode tables: 
0/64 done 00:31:40.390 Creating journal (8192 blocks): done 00:31:40.390 Writing superblocks and filesystem accounting information: 0/64 done 00:31:40.390 00:31:40.390 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:31:40.390 20:30:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 71797 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:31:45.650 ************************************ 00:31:45.650 END TEST filesystem_in_capsule_ext4 00:31:45.650 ************************************ 00:31:45.650 00:31:45.650 real 0m5.532s 00:31:45.650 user 0m0.022s 00:31:45.650 sys 0m0.057s 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:45.650 
************************************ 00:31:45.650 START TEST filesystem_in_capsule_btrfs 00:31:45.650 ************************************ 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:31:45.650 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:31:45.908 btrfs-progs v6.8.1 00:31:45.908 See https://btrfs.readthedocs.io for more information. 00:31:45.908 00:31:45.908 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
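The make_filesystem helper being traced here exists mainly to pick the right force flag per filesystem: mke2fs spells it -F, while mkfs.btrfs and mkfs.xfs use -f. A minimal sketch of that logic follows; the flag selection and local variables mirror the trace, while the retry loop is an assumption, since the trace succeeds on the first attempt and simply returns 0:

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force

        # ext4 is the odd one out: its force flag is uppercase.
        if [ "$fstype" = ext4 ]; then
            force=-F
        else
            force=-f
        fi

        # Assumed retry loop; in the trace the first mkfs succeeds.
        while (( i++ < 3 )); do
            mkfs."$fstype" $force "$dev_name" && return 0
            sleep 1
        done
        return 1
    }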
00:31:45.908 NOTE: several default settings have changed in version 5.15, please make sure 00:31:45.908 this does not affect your deployments: 00:31:45.908 - DUP for metadata (-m dup) 00:31:45.908 - enabled no-holes (-O no-holes) 00:31:45.908 - enabled free-space-tree (-R free-space-tree) 00:31:45.908 00:31:45.908 Label: (null) 00:31:45.908 UUID: c58b1e53-e971-4c44-a285-4913ac12ccab 00:31:45.908 Node size: 16384 00:31:45.908 Sector size: 4096 (CPU page size: 4096) 00:31:45.908 Filesystem size: 510.00MiB 00:31:45.908 Block group profiles: 00:31:45.908 Data: single 8.00MiB 00:31:45.908 Metadata: DUP 32.00MiB 00:31:45.908 System: DUP 8.00MiB 00:31:45.908 SSD detected: yes 00:31:45.908 Zoned device: no 00:31:45.908 Features: extref, skinny-metadata, no-holes, free-space-tree 00:31:45.908 Checksum: crc32c 00:31:45.908 Number of devices: 1 00:31:45.908 Devices: 00:31:45.908 ID SIZE PATH 00:31:45.908 1 510.00MiB /dev/nvme0n1p1 00:31:45.908 00:31:45.908 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:31:45.908 20:30:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:31:45.908 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:31:45.908 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:31:45.908 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:31:45.908 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:31:45.908 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:31:45.908 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:31:45.908 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 71797 00:31:45.908 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:31:45.908 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:31:45.908 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:31:45.908 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:31:45.908 00:31:45.908 real 0m0.181s 00:31:45.908 user 0m0.018s 00:31:45.908 sys 0m0.061s 00:31:45.908 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:45.908 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- 
# set +x 00:31:45.908 ************************************ 00:31:45.909 END TEST filesystem_in_capsule_btrfs 00:31:45.909 ************************************ 00:31:45.909 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:31:45.909 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:45.909 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:45.909 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:45.909 ************************************ 00:31:45.909 START TEST filesystem_in_capsule_xfs 00:31:45.909 ************************************ 00:31:45.909 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:31:45.909 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:31:45.909 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:31:45.909 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:31:45.909 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:31:45.909 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:31:45.909 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:31:45.909 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:31:45.909 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:31:45.909 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:31:45.909 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:31:45.909 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:31:45.909 = sectsz=512 attr=2, projid32bit=1 00:31:45.909 = crc=1 finobt=1, sparse=1, rmapbt=0 00:31:45.909 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:31:45.909 data = bsize=4096 blocks=130560, imaxpct=25 00:31:45.909 = sunit=0 swidth=0 blks 00:31:45.909 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:31:45.909 log =internal log bsize=4096 blocks=16384, version=2 00:31:45.909 = sectsz=512 sunit=0 blks, lazy-count=1 00:31:45.909 realtime =none extsz=4096 blocks=0, rtextents=0 00:31:46.843 Discarding blocks...Done. 
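After mkfs, every filesystem in this run gets the same smoke test: mount the partition, push a file through the NVMe/TCP path, delete it, unmount, then confirm the target process and its block devices are still standing. As a standalone sketch (the pid 71797, device names, and paths come from the trace; error handling is abbreviated):

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa      # data travels initiator -> TCP -> target -> malloc bdev
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device

    kill -0 71797                             # target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still enumerated
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still enumerated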
00:31:46.843 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:31:46.843 20:30:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:31:48.738 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:31:48.738 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:31:48.738 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:31:48.738 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:31:48.738 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:31:48.738 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:31:48.738 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 71797 00:31:48.738 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:31:48.738 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:31:48.738 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:31:48.738 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:31:48.738 00:31:48.738 real 0m2.562s 00:31:48.738 user 0m0.021s 00:31:48.738 sys 0m0.050s 00:31:48.738 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:48.738 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:31:48.738 ************************************ 00:31:48.738 END TEST filesystem_in_capsule_xfs 00:31:48.738 ************************************ 00:31:48.738 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:31:48.738 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:31:48.738 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:48.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:48.738 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:48.739 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:31:48.739 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:48.739 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:31:48.739 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:31:48.739 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:48.739 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:31:48.739 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:48.739 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.739 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:48.739 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.739 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:31:48.739 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 71797 00:31:48.739 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 71797 ']' 00:31:48.739 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 71797 00:31:48.739 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:31:48.739 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:48.739 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71797 00:31:48.739 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:48.739 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:48.739 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71797' 00:31:48.739 killing process with pid 71797 00:31:48.739 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 71797 00:31:48.739 20:30:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 71797 00:31:48.996 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:31:48.996 00:31:48.996 real 0m13.536s 00:31:48.996 user 0m51.785s 00:31:48.996 sys 0m1.915s 00:31:48.996 20:30:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:48.996 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:48.996 ************************************ 00:31:48.996 END TEST nvmf_filesystem_in_capsule 00:31:48.996 ************************************ 00:31:48.996 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:31:48.996 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:48.996 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:31:48.996 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:48.996 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:31:48.996 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:48.996 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:48.996 rmmod nvme_tcp 00:31:49.253 rmmod nvme_fabrics 00:31:49.253 rmmod nvme_keyring 00:31:49.253 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:49.253 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:31:49.253 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:31:49.253 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:31:49.253 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:49.253 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:49.253 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:49.253 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:31:49.253 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:31:49.253 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:49.253 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:31:49.253 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:49.253 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:49.254 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:49.254 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:49.254 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:49.254 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:49.254 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:49.254 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:49.254 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
00:31:49.254 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:49.254 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:49.254 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:49.254 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:49.254 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:49.254 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:49.512 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:49.512 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.512 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.512 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.512 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0 00:31:49.512 00:31:49.512 real 0m28.417s 00:31:49.512 user 1m43.985s 00:31:49.512 sys 0m4.390s 00:31:49.512 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:49.512 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:31:49.512 ************************************ 00:31:49.512 END TEST nvmf_filesystem 00:31:49.512 ************************************ 00:31:49.512 20:30:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:31:49.512 20:30:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:49.512 20:30:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:49.512 20:30:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:49.512 ************************************ 00:31:49.512 START TEST nvmf_target_discovery 00:31:49.512 ************************************ 00:31:49.512 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:31:49.512 * Looking for test storage... 
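nvmftestfini above unwinds the fixture in reverse order of setup: SPDK-tagged iptables rules are stripped by round-tripping the ruleset through grep, the bridge members are detached and brought down, and the veth pairs and target namespace are deleted. A condensed sketch of that order, using the interface names from the log (illustrative, not the verbatim nvmf/common.sh body):

    # iptr: restore every rule except those tagged with the SPDK_NVMF comment.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Detach all bridge ends first, then bring them down, as in the log.
    for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$ifc" nomaster
    done
    for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$ifc" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if        # deleting a veth also removes its peer
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # what _remove_spdk_ns amounts to (assumed)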
00:31:49.512 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:49.512 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:49.512 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:31:49.512 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:49.771 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:49.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.772 --rc genhtml_branch_coverage=1 00:31:49.772 --rc genhtml_function_coverage=1 00:31:49.772 --rc genhtml_legend=1 00:31:49.772 --rc geninfo_all_blocks=1 00:31:49.772 --rc geninfo_unexecuted_blocks=1 00:31:49.772 00:31:49.772 ' 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:49.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.772 --rc genhtml_branch_coverage=1 00:31:49.772 --rc genhtml_function_coverage=1 00:31:49.772 --rc genhtml_legend=1 00:31:49.772 --rc geninfo_all_blocks=1 00:31:49.772 --rc geninfo_unexecuted_blocks=1 00:31:49.772 00:31:49.772 ' 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:49.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.772 --rc genhtml_branch_coverage=1 00:31:49.772 --rc genhtml_function_coverage=1 00:31:49.772 --rc genhtml_legend=1 00:31:49.772 --rc geninfo_all_blocks=1 00:31:49.772 --rc geninfo_unexecuted_blocks=1 00:31:49.772 00:31:49.772 ' 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:49.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.772 --rc genhtml_branch_coverage=1 00:31:49.772 --rc genhtml_function_coverage=1 00:31:49.772 --rc genhtml_legend=1 00:31:49.772 --rc geninfo_all_blocks=1 00:31:49.772 --rc geninfo_unexecuted_blocks=1 00:31:49.772 00:31:49.772 ' 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:49.772 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # nvmf_veth_init 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 
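The "[: : integer expression expected" message above is a real, if harmless, defect in nvmf/common.sh: build_nvmf_app_args compares an unset variable with -eq, so test receives an empty operand and errors out, and the script only survives because the branch is simply not taken. An empty-string-safe guard avoids the noise. A short sketch (SOME_FLAG is a placeholder name; the log does not show which variable was empty):

    # Failing pattern from the log:  [ '' -eq 1 ]  ->  integer expression expected
    # Empty-safe alternative: default the flag to 0 before the numeric test.
    if [[ ${SOME_FLAG:-0} -eq 1 ]]; then    # SOME_FLAG is hypothetical
        echo "flag enabled"
    fi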
00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:49.772 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:49.773 Cannot find device "nvmf_init_br" 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:49.773 Cannot find device "nvmf_init_br2" 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:49.773 Cannot find device "nvmf_tgt_br" 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:49.773 Cannot find device "nvmf_tgt_br2" 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:49.773 Cannot find device "nvmf_init_br" 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:49.773 Cannot find device "nvmf_init_br2" 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:49.773 Cannot find device "nvmf_tgt_br" 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:49.773 Cannot find device "nvmf_tgt_br2" 00:31:49.773 20:30:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:49.773 Cannot find device "nvmf_br" 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:49.773 Cannot find device "nvmf_init_if" 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:49.773 Cannot find device "nvmf_init_if2" 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:49.773 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:49.773 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:49.773 20:30:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:50.032 20:30:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:50.032 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:31:50.032 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:31:50.032 00:31:50.032 --- 10.0.0.3 ping statistics --- 00:31:50.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.032 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:50.032 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:50.032 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:31:50.032 00:31:50.032 --- 10.0.0.4 ping statistics --- 00:31:50.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.032 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:50.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:50.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:31:50.032 00:31:50.032 --- 10.0.0.1 ping statistics --- 00:31:50.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.032 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:50.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:50.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:31:50.032 00:31:50.032 --- 10.0.0.2 ping statistics --- 00:31:50.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.032 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # return 0 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=72382 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
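With connectivity across the bridge verified by the four pings, nvmfappstart prepends the namespace wrapper to the app command (NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")), so nvmf_tgt runs inside nvmf_tgt_ns_spdk and binds the 10.0.0.3/10.0.0.4 side of the veth pairs. A minimal launch-and-wait sketch; the log shows waitforlisten's rpc_addr=/var/tmp/spdk.sock and max_retries=100, but the polling body below is an assumption:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    rpc_addr=/var/tmp/spdk.sock
    for (( i = 0; i < 100; i++ )); do     # max_retries=100, as in the log
        [[ -S $rpc_addr ]] && break       # assumed check: wait for the RPC socket
        sleep 0.5
    done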
00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 72382 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 72382 ']' 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:50.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:50.032 20:30:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:50.300 [2024-10-01 20:30:45.320088] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:31:50.300 [2024-10-01 20:30:45.320185] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:50.300 [2024-10-01 20:30:45.459843] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:50.300 [2024-10-01 20:30:45.532894] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:50.300 [2024-10-01 20:30:45.532968] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:50.300 [2024-10-01 20:30:45.532983] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:50.300 [2024-10-01 20:30:45.532993] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:50.300 [2024-10-01 20:30:45.533002] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
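The RPC sequence that follows (the seq 1 4 loop below) builds the discovery fixture: a TCP transport, then four null-bdev-backed subsystems, each with one namespace and a listener on 4420, plus a discovery listener and a referral on 4430. Condensed into plain rpc.py calls (the log drives the same methods through the rpc_cmd wrapper; the rpc.py path follows the repo layout seen elsewhere in the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 1 4); do
        $rpc bdev_null_create Null$i 102400 512    # size/block-size args as in the log
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            -a -s SPDK0000000000000$i
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.3 -s 4420
    done
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430

The nvme discover output further down confirms the result: six discovery log entries, one current discovery subsystem, four NVMe subsystems on 4420, and one referral on 4430.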
00:31:50.300 [2024-10-01 20:30:45.533203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:50.300 [2024-10-01 20:30:45.533328] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:50.300 [2024-10-01 20:30:45.533330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.300 [2024-10-01 20:30:45.533272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.248 [2024-10-01 20:30:46.408491] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.248 Null1 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.248 20:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.248 [2024-10-01 20:30:46.452610] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.248 Null2 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:31:51.248 Null3 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.248 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.508 Null4 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.508 20:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.508 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -a 10.0.0.3 -s 4420 00:31:51.508 00:31:51.508 Discovery Log Number of Records 6, Generation counter 6 00:31:51.508 =====Discovery Log Entry 0====== 00:31:51.508 trtype: tcp 00:31:51.508 adrfam: ipv4 00:31:51.508 subtype: current discovery subsystem 00:31:51.508 treq: not required 00:31:51.508 portid: 0 00:31:51.508 trsvcid: 4420 00:31:51.508 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:51.508 traddr: 10.0.0.3 00:31:51.508 eflags: explicit discovery connections, duplicate discovery information 00:31:51.508 sectype: none 00:31:51.508 =====Discovery Log Entry 1====== 00:31:51.508 trtype: tcp 00:31:51.508 adrfam: ipv4 00:31:51.508 subtype: nvme subsystem 00:31:51.508 treq: not required 00:31:51.508 portid: 0 00:31:51.508 trsvcid: 4420 00:31:51.508 subnqn: nqn.2016-06.io.spdk:cnode1 00:31:51.508 traddr: 10.0.0.3 00:31:51.508 eflags: none 00:31:51.508 sectype: none 00:31:51.508 =====Discovery Log Entry 2====== 00:31:51.508 trtype: tcp 00:31:51.508 adrfam: ipv4 00:31:51.508 subtype: nvme subsystem 00:31:51.508 treq: not required 00:31:51.509 portid: 0 00:31:51.509 trsvcid: 4420 00:31:51.509 subnqn: nqn.2016-06.io.spdk:cnode2 00:31:51.509 traddr: 10.0.0.3 00:31:51.509 eflags: none 00:31:51.509 sectype: none 00:31:51.509 =====Discovery Log Entry 3====== 00:31:51.509 trtype: tcp 00:31:51.509 adrfam: ipv4 00:31:51.509 subtype: nvme subsystem 00:31:51.509 treq: not required 00:31:51.509 portid: 0 00:31:51.509 trsvcid: 4420 00:31:51.509 subnqn: nqn.2016-06.io.spdk:cnode3 00:31:51.509 traddr: 10.0.0.3 00:31:51.509 eflags: none 00:31:51.509 sectype: none 00:31:51.509 =====Discovery Log Entry 4====== 00:31:51.509 trtype: tcp 00:31:51.509 adrfam: ipv4 00:31:51.509 subtype: nvme subsystem 
00:31:51.509 treq: not required 00:31:51.509 portid: 0 00:31:51.509 trsvcid: 4420 00:31:51.509 subnqn: nqn.2016-06.io.spdk:cnode4 00:31:51.509 traddr: 10.0.0.3 00:31:51.509 eflags: none 00:31:51.509 sectype: none 00:31:51.509 =====Discovery Log Entry 5====== 00:31:51.509 trtype: tcp 00:31:51.509 adrfam: ipv4 00:31:51.509 subtype: discovery subsystem referral 00:31:51.509 treq: not required 00:31:51.509 portid: 0 00:31:51.509 trsvcid: 4430 00:31:51.509 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:51.509 traddr: 10.0.0.3 00:31:51.509 eflags: none 00:31:51.509 sectype: none 00:31:51.509 Perform nvmf subsystem discovery via RPC 00:31:51.509 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:31:51.509 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:31:51.509 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.509 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.509 [ 00:31:51.509 { 00:31:51.509 "allow_any_host": true, 00:31:51.509 "hosts": [], 00:31:51.509 "listen_addresses": [ 00:31:51.509 { 00:31:51.509 "adrfam": "IPv4", 00:31:51.509 "traddr": "10.0.0.3", 00:31:51.509 "trsvcid": "4420", 00:31:51.509 "trtype": "TCP" 00:31:51.509 } 00:31:51.509 ], 00:31:51.509 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:51.509 "subtype": "Discovery" 00:31:51.509 }, 00:31:51.509 { 00:31:51.509 "allow_any_host": true, 00:31:51.509 "hosts": [], 00:31:51.509 "listen_addresses": [ 00:31:51.509 { 00:31:51.509 "adrfam": "IPv4", 00:31:51.509 "traddr": "10.0.0.3", 00:31:51.509 "trsvcid": "4420", 00:31:51.509 "trtype": "TCP" 00:31:51.509 } 00:31:51.509 ], 00:31:51.509 "max_cntlid": 65519, 00:31:51.509 "max_namespaces": 32, 00:31:51.509 "min_cntlid": 1, 00:31:51.509 "model_number": "SPDK bdev Controller", 00:31:51.509 "namespaces": [ 00:31:51.509 { 00:31:51.509 "bdev_name": "Null1", 00:31:51.509 "name": "Null1", 00:31:51.509 "nguid": "3EE155B67C35461EA553D616482839DB", 00:31:51.509 "nsid": 1, 00:31:51.509 "uuid": "3ee155b6-7c35-461e-a553-d616482839db" 00:31:51.509 } 00:31:51.509 ], 00:31:51.509 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:51.509 "serial_number": "SPDK00000000000001", 00:31:51.509 "subtype": "NVMe" 00:31:51.509 }, 00:31:51.509 { 00:31:51.509 "allow_any_host": true, 00:31:51.509 "hosts": [], 00:31:51.509 "listen_addresses": [ 00:31:51.509 { 00:31:51.509 "adrfam": "IPv4", 00:31:51.509 "traddr": "10.0.0.3", 00:31:51.509 "trsvcid": "4420", 00:31:51.509 "trtype": "TCP" 00:31:51.509 } 00:31:51.509 ], 00:31:51.509 "max_cntlid": 65519, 00:31:51.509 "max_namespaces": 32, 00:31:51.509 "min_cntlid": 1, 00:31:51.509 "model_number": "SPDK bdev Controller", 00:31:51.509 "namespaces": [ 00:31:51.509 { 00:31:51.509 "bdev_name": "Null2", 00:31:51.509 "name": "Null2", 00:31:51.509 "nguid": "8409160A5ACA49639D08583AF5CA0FC1", 00:31:51.509 "nsid": 1, 00:31:51.509 "uuid": "8409160a-5aca-4963-9d08-583af5ca0fc1" 00:31:51.509 } 00:31:51.509 ], 00:31:51.509 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:51.509 "serial_number": "SPDK00000000000002", 00:31:51.509 "subtype": "NVMe" 00:31:51.509 }, 00:31:51.509 { 00:31:51.509 "allow_any_host": true, 00:31:51.509 "hosts": [], 00:31:51.509 "listen_addresses": [ 00:31:51.509 { 00:31:51.509 "adrfam": "IPv4", 00:31:51.509 "traddr": "10.0.0.3", 00:31:51.509 "trsvcid": "4420", 00:31:51.509 
"trtype": "TCP" 00:31:51.509 } 00:31:51.509 ], 00:31:51.509 "max_cntlid": 65519, 00:31:51.509 "max_namespaces": 32, 00:31:51.509 "min_cntlid": 1, 00:31:51.509 "model_number": "SPDK bdev Controller", 00:31:51.509 "namespaces": [ 00:31:51.509 { 00:31:51.509 "bdev_name": "Null3", 00:31:51.509 "name": "Null3", 00:31:51.509 "nguid": "AFF6655C2DA94CACB7FAC68EE8AC242F", 00:31:51.509 "nsid": 1, 00:31:51.509 "uuid": "aff6655c-2da9-4cac-b7fa-c68ee8ac242f" 00:31:51.509 } 00:31:51.509 ], 00:31:51.509 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:31:51.509 "serial_number": "SPDK00000000000003", 00:31:51.509 "subtype": "NVMe" 00:31:51.509 }, 00:31:51.509 { 00:31:51.509 "allow_any_host": true, 00:31:51.509 "hosts": [], 00:31:51.509 "listen_addresses": [ 00:31:51.509 { 00:31:51.509 "adrfam": "IPv4", 00:31:51.509 "traddr": "10.0.0.3", 00:31:51.509 "trsvcid": "4420", 00:31:51.509 "trtype": "TCP" 00:31:51.509 } 00:31:51.509 ], 00:31:51.509 "max_cntlid": 65519, 00:31:51.509 "max_namespaces": 32, 00:31:51.509 "min_cntlid": 1, 00:31:51.509 "model_number": "SPDK bdev Controller", 00:31:51.509 "namespaces": [ 00:31:51.509 { 00:31:51.509 "bdev_name": "Null4", 00:31:51.509 "name": "Null4", 00:31:51.509 "nguid": "DFCCC2E9B73F4446BDBC50EF8FB7BA94", 00:31:51.509 "nsid": 1, 00:31:51.509 "uuid": "dfccc2e9-b73f-4446-bdbc-50ef8fb7ba94" 00:31:51.509 } 00:31:51.509 ], 00:31:51.509 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:31:51.509 "serial_number": "SPDK00000000000004", 00:31:51.509 "subtype": "NVMe" 00:31:51.509 } 00:31:51.509 ] 00:31:51.509 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.509 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:31:51.509 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:31:51.509 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:51.509 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.509 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.509 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.509 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:31:51.509 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.509 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.509 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.509 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:31:51.509 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:51.509 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.509 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.768 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.768 20:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:31:51.768 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.768 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.768 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.768 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:31:51.768 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:51.768 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.768 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.768 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.768 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:31:51.768 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.768 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.768 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.768 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:31:51.768 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:31:51.768 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.768 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.768 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.768 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:31:51.768 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.768 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:31:51.769 20:30:46 
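Teardown mirrors setup: the same seq 1 4 loop deletes each subsystem and its null bdev, the 4430 referral is removed, and bdev_get_bdevs is expected to come back empty. The logged sequence, condensed:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in $(seq 1 4); do
  "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
  "$rpc" bdev_null_delete "Null$i"
done
"$rpc" nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430
left=$("$rpc" bdev_get_bdevs | jq -r '.[].name')   # the '[' -n '' ']' check above: nothing left
[ -z "$left" ] || echo "unexpected bdevs remain: $left" >&2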
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:51.769 rmmod nvme_tcp 00:31:51.769 rmmod nvme_fabrics 00:31:51.769 rmmod nvme_keyring 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 72382 ']' 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 72382 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 72382 ']' 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 72382 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72382 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:51.769 killing process with pid 72382 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72382' 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 72382 00:31:51.769 20:30:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 72382 00:31:52.028 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
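killprocess, run above against pid 72382, guards the kill: the pid must be set and alive, on Linux the process name is read via ps (SPDK reactors report as reactor_0), and a wrapping sudo is never signalled directly. A sketch of that logic under those assumptions, not the verbatim helper:

killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1
  kill -0 "$pid" 2>/dev/null || return 1     # still running?
  local name
  name=$(ps --no-headers -o comm= "$pid")    # Linux branch; here: reactor_0
  [ "$name" != "sudo" ] || return 1          # don't signal the sudo wrapper itself
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"    # valid because nvmf_tgt was started by this shell
}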
nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:52.028 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:52.028 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:52.028 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:31:52.028 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:31:52.028 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:52.028 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:31:52.028 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:52.028 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:52.028 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:52.028 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:52.028 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:52.286 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:52.286 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:52.286 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:52.286 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:52.286 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:52.286 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:52.286 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:52.286 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:52.286 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:52.286 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:52.286 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:52.286 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.286 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:52.286 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.286 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0 00:31:52.286 00:31:52.286 real 0m2.900s 00:31:52.286 user 0m7.140s 00:31:52.286 sys 0m0.725s 00:31:52.286 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- 
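nvmf_tcp_fini undoes only what the test added: iptr drops the SPDK_NVMF-tagged iptables rules and nvmf_veth_fini dismantles the veth/bridge topology in the order shown. Sketch of the sequence above; the final _remove_spdk_ns step runs with xtrace disabled, so its body (namespace deletion) is an assumption here:

iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep every non-SPDK rule
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" nomaster
done
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                       # assumed body of _remove_spdk_ns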
# xtrace_disable 00:31:52.286 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:52.286 ************************************ 00:31:52.286 END TEST nvmf_target_discovery 00:31:52.286 ************************************ 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:52.545 ************************************ 00:31:52.545 START TEST nvmf_referrals 00:31:52.545 ************************************ 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:31:52.545 * Looking for test storage... 00:31:52.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:52.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.545 --rc genhtml_branch_coverage=1 00:31:52.545 --rc genhtml_function_coverage=1 00:31:52.545 --rc genhtml_legend=1 00:31:52.545 --rc geninfo_all_blocks=1 00:31:52.545 --rc geninfo_unexecuted_blocks=1 00:31:52.545 00:31:52.545 ' 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:52.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.545 --rc genhtml_branch_coverage=1 00:31:52.545 --rc genhtml_function_coverage=1 00:31:52.545 --rc genhtml_legend=1 00:31:52.545 --rc geninfo_all_blocks=1 00:31:52.545 --rc geninfo_unexecuted_blocks=1 00:31:52.545 00:31:52.545 ' 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:52.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.545 --rc genhtml_branch_coverage=1 00:31:52.545 --rc genhtml_function_coverage=1 00:31:52.545 --rc genhtml_legend=1 00:31:52.545 --rc geninfo_all_blocks=1 00:31:52.545 --rc geninfo_unexecuted_blocks=1 00:31:52.545 00:31:52.545 ' 00:31:52.545 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:52.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.545 --rc genhtml_branch_coverage=1 00:31:52.545 --rc genhtml_function_coverage=1 00:31:52.545 --rc genhtml_legend=1 00:31:52.545 --rc geninfo_all_blocks=1 00:31:52.545 --rc geninfo_unexecuted_blocks=1 00:31:52.545 00:31:52.545 ' 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 
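The lcov gate above (lt 1.15 2) goes through scripts/common.sh's cmp_versions: both version strings are split on '.', '-' and ':' and compared component-wise, with missing components treated as 0. A condensed sketch of that comparison, not the verbatim helper:

lt() {   # lt A B: succeed when version A sorts strictly before B
  local IFS='.-:'
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local v n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( v = 0; v < n; v++ )); do
    if (( ${a[v]:-0} < ${b[v]:-0} )); then return 0; fi
    if (( ${a[v]:-0} > ${b[v]:-0} )); then return 1; fi
  done
  return 1   # equal versions are not less-than
}
lt 1.15 2 && echo "lcov predates 2.0"   # matches the branch taken above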
00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:52.546 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:31:52.546 20:30:47 
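The "[: : integer expression expected" line above is not log noise; it is bash complaining about nvmf/common.sh line 33, where '[' '' -eq 1 ']' feeds an empty string to the numeric -eq operator. The test tolerates the non-zero status, but the warning is easy to avoid; the variable name below is hypothetical:

flag=""                  # empty, as in the failing check above
[ "$flag" -eq 1 ]        # -> "[: : integer expression expected", exit status 2
[ "${flag:-0}" -eq 1 ]   # default the empty value to 0: quiet, simply false
[[ $flag == 1 ]]         # or compare as a string inside [[ ]]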
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # nvmf_veth_init 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:52.546 Cannot find device "nvmf_init_br" 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:52.546 Cannot find device "nvmf_init_br2" 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:52.546 Cannot find device "nvmf_tgt_br" 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true 00:31:52.546 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:52.805 Cannot find device "nvmf_tgt_br2" 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:52.805 Cannot find device "nvmf_init_br" 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:52.805 Cannot find device "nvmf_init_br2" 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:52.805 Cannot find device "nvmf_tgt_br" 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:52.805 Cannot find device "nvmf_tgt_br2" 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:52.805 Cannot find device "nvmf_br" 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:52.805 Cannot find device "nvmf_init_if" 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:52.805 Cannot find device "nvmf_init_if2" 00:31:52.805 20:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:52.805 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:52.805 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:52.805 20:30:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:52.805 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:52.805 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:52.805 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:52.805 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:52.805 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:52.805 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:52.805 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:52.805 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:52.805 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:52.805 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
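nvmf_veth_init builds the test network: one namespace (nvmf_tgt_ns_spdk) and four veth pairs, with the *_if ends carrying 10.0.0.1-.4/24 and the *_br ends left for the bridge that the entries just below create. The commands above, condensed:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator side, root ns
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target side, moved below
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# ...every end is then brought up; the *_br peers join bridge nvmf_br next.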
nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:52.805 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:52.805 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:53.065 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:53.065 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:31:53.065 00:31:53.065 --- 10.0.0.3 ping statistics --- 00:31:53.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.065 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:53.065 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:53.065 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:31:53.065 00:31:53.065 --- 10.0.0.4 ping statistics --- 00:31:53.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.065 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:53.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:53.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:31:53.065 00:31:53.065 --- 10.0.0.1 ping statistics --- 00:31:53.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.065 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:53.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:53.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:31:53.065 00:31:53.065 --- 10.0.0.2 ping statistics --- 00:31:53.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.065 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # return 0 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=72661 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 72661 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 72661 ']' 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:53.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:53.065 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:53.065 [2024-10-01 20:30:48.218313] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
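With all four pings at 0% loss, nvmfappstart launches the target inside the namespace and waitforlisten blocks until the RPC socket answers. A simplified stand-in for that pair; the real helper retries with a timeout, so the polling loop below is an assumption:

ip netns exec nvmf_tgt_ns_spdk \
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
      >/dev/null 2>&1; do
  sleep 0.1        # poll until nvmf_tgt (pid 72661 above) listens on spdk.sock
done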
00:31:53.065 [2024-10-01 20:30:48.218416] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:53.430 [2024-10-01 20:30:48.359265] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:53.430 [2024-10-01 20:30:48.418011] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:53.430 [2024-10-01 20:30:48.418065] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:53.430 [2024-10-01 20:30:48.418077] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:53.430 [2024-10-01 20:30:48.418085] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:53.430 [2024-10-01 20:30:48.418092] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:53.430 [2024-10-01 20:30:48.418321] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:53.430 [2024-10-01 20:30:48.418469] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:53.430 [2024-10-01 20:30:48.419158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.430 [2024-10-01 20:30:48.419163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:53.430 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:53.430 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:31:53.430 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:53.430 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:53.430 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:53.430 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:53.430 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:53.430 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:53.431 [2024-10-01 20:30:48.595320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:53.431 [2024-10-01 20:30:48.607478] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:31:53.431 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
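get_referral_ips reads the referral list both ways and the test asserts the two strings match: the rpc path (above) via nvmf_discovery_get_referrals, and the nvme path (in the entries that follow) via a discover against the 8009 discovery listener, filtering out the current discovery subsystem record. Sketch of the comparison:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
want="127.0.0.2 127.0.0.3 127.0.0.4"
got_rpc=$("$rpc" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort | xargs)
got_nvme=$(nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.3 -s 8009 -o json |
  jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort | xargs)
[ "$got_rpc" = "$want" ] && [ "$got_nvme" = "$want" ]   # NVME_HOST comes from nvmf/common.sh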
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -a 10.0.0.3 -s 8009 -o json 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -a 10.0.0.3 -s 8009 -o json 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:31:53.690 20:30:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
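The helper being traced here resolves referral addresses two ways: over JSON-RPC (nvmf_discovery_get_referrals piped through jq) and over the fabric itself (nvme discover -o json). A condensed sketch of that dual-path check, assuming rpc_cmd is the harness wrapper around scripts/rpc.py and NVME_HOST carries the --hostnqn/--hostid pair set up in test/nvmf/common.sh:

get_referral_ips() {   # illustrative reduction, not the verbatim helper
    if [[ $1 == rpc ]]; then
        rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    else
        nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.3 -s 8009 -o json |
            jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
            sort
    fi
}
# The test asserts both views agree, e.g.:
# [[ $(get_referral_ips rpc) == $(get_referral_ips nvme) ]]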
00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -a 10.0.0.3 -s 8009 -o json 00:31:53.949 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:31:54.207 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:31:54.207 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:31:54.207 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:31:54.207 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:31:54.207 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:31:54.207 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -a 10.0.0.3 -s 8009 -o json 00:31:54.207 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:31:54.207 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:31:54.207 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:31:54.207 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:31:54.207 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:31:54.207 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -a 10.0.0.3 -s 8009 -o json 00:31:54.207 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -a 10.0.0.3 -s 8009 -o json 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -a 10.0.0.3 -s 8009 -o json 00:31:54.465 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:31:54.724 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:31:54.724 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:31:54.724 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:31:54.724 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:31:54.724 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -a 10.0.0.3 -s 8009 -o json 00:31:54.724 
20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:31:54.724 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:31:54.724 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:31:54.724 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.724 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:54.724 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.724 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:31:54.724 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:31:54.724 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.724 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:54.724 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.982 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:31:54.982 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:31:54.983 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:31:54.983 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:31:54.983 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -a 10.0.0.3 -s 8009 -o json 00:31:54.983 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:31:54.983 20:30:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:31:54.983 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:31:54.983 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:31:54.983 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:31:54.983 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:31:54.983 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:54.983 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:31:54.983 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:54.983 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:31:54.983 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:54.983 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
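Stripped of the xtrace plumbing, the referral add/verify/remove cycle traced above reduces to the following RPC round trip (flags copied from the trace; rpc_cmd wraps scripts/rpc.py):

rpc_cmd nvmf_discovery_add_referral    -t tcp -a 127.0.0.2 -s 4430 -n discovery
rpc_cmd nvmf_discovery_add_referral    -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
rpc_cmd nvmf_discovery_get_referrals | jq length    # both entries present
rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
rpc_cmd nvmf_discovery_get_referrals | jq length    # back to 0

After each mutation the test re-reads the referral list both over RPC and via nvme discover, so a referral that the target accepts but does not advertise would fail the comparison.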
00:31:54.983 rmmod nvme_tcp 00:31:54.983 rmmod nvme_fabrics 00:31:54.983 rmmod nvme_keyring 00:31:55.241 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:55.241 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:31:55.241 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:31:55.241 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 72661 ']' 00:31:55.241 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 72661 00:31:55.241 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 72661 ']' 00:31:55.241 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 72661 00:31:55.241 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:31:55.242 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:55.242 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72661 00:31:55.242 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:55.242 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:55.242 killing process with pid 72661 00:31:55.242 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72661' 00:31:55.242 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 72661 00:31:55.242 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 72661 00:31:55.242 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:55.242 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:55.242 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:55.242 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:55.501 20:30:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0 00:31:55.501 00:31:55.501 real 0m3.196s 00:31:55.501 user 0m9.198s 00:31:55.501 sys 0m0.899s 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:55.501 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:55.501 ************************************ 00:31:55.501 END TEST nvmf_referrals 00:31:55.501 ************************************ 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:55.761 ************************************ 00:31:55.761 START TEST nvmf_connect_disconnect 00:31:55.761 ************************************ 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:31:55.761 * Looking for test storage... 
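The next test opens by probing the installed lcov against scripts/common.sh's cmp_versions (the lt 1.15 2 walk traced just below). A minimal sketch of that comparison, assuming purely numeric components; the real helper also validates each field with a decimal check before comparing:

version_lt() {                       # 0 (true) if $1 < $2
    local -a a b
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    local i n=$((${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}))
    for ((i = 0; i < n; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1                         # equal, so not strictly less-than
}
version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2"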
00:31:55.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:55.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.761 --rc genhtml_branch_coverage=1 00:31:55.761 --rc genhtml_function_coverage=1 00:31:55.761 --rc genhtml_legend=1 00:31:55.761 --rc geninfo_all_blocks=1 00:31:55.761 --rc geninfo_unexecuted_blocks=1 00:31:55.761 00:31:55.761 ' 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:55.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.761 --rc genhtml_branch_coverage=1 00:31:55.761 --rc genhtml_function_coverage=1 00:31:55.761 --rc genhtml_legend=1 00:31:55.761 --rc geninfo_all_blocks=1 00:31:55.761 --rc geninfo_unexecuted_blocks=1 00:31:55.761 00:31:55.761 ' 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:55.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.761 --rc genhtml_branch_coverage=1 00:31:55.761 --rc genhtml_function_coverage=1 00:31:55.761 --rc genhtml_legend=1 00:31:55.761 --rc geninfo_all_blocks=1 00:31:55.761 --rc geninfo_unexecuted_blocks=1 00:31:55.761 00:31:55.761 ' 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:55.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.761 --rc genhtml_branch_coverage=1 00:31:55.761 --rc genhtml_function_coverage=1 00:31:55.761 --rc genhtml_legend=1 00:31:55.761 --rc geninfo_all_blocks=1 00:31:55.761 --rc geninfo_unexecuted_blocks=1 00:31:55.761 00:31:55.761 ' 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.761 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:55.762 20:30:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:55.762 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # nvmf_veth_init 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:55.762 20:30:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:55.762 Cannot find device "nvmf_init_br" 00:31:55.762 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:31:55.762 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:56.022 Cannot find device "nvmf_init_br2" 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:56.022 Cannot find device "nvmf_tgt_br" 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:56.022 Cannot find device "nvmf_tgt_br2" 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:56.022 Cannot find device "nvmf_init_br" 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:56.022 Cannot find device "nvmf_init_br2" 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:56.022 Cannot find device "nvmf_tgt_br" 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:56.022 Cannot find device "nvmf_tgt_br2" 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true 
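The "Cannot find device" lines in this stretch are the expected first half of nvmf_veth_init: stale interfaces are deleted unconditionally, each failure swallowed (hence the trailing true after every ip command), before the topology is rebuilt. A condensed sketch of the rebuild that follows, showing one initiator/target pair; all commands appear in the trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk        # target side lives in the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br && ip link set nvmf_init_br up
ip link set nvmf_tgt_br  master nvmf_br && ip link set nvmf_tgt_br up
ping -c 1 10.0.0.3        # initiator to target across the bridge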
00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:56.022 Cannot find device "nvmf_br" 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:56.022 Cannot find device "nvmf_init_if" 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:56.022 Cannot find device "nvmf_init_if2" 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:56.022 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:56.022 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:56.022 20:30:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:56.022 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:56.280 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:56.280 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:56.280 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:56.280 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:56.280 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:56.281 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:31:56.281 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:31:56.281 00:31:56.281 --- 10.0.0.3 ping statistics --- 00:31:56.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.281 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:56.281 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:56.281 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:31:56.281 00:31:56.281 --- 10.0.0.4 ping statistics --- 00:31:56.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.281 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:56.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:56.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:31:56.281 00:31:56.281 --- 10.0.0.1 ping statistics --- 00:31:56.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.281 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:56.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:56.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:31:56.281 00:31:56.281 --- 10.0.0.2 ping statistics --- 00:31:56.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.281 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # return 0 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=73008 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 73008 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 73008 ']' 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:56.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:56.281 20:30:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:56.281 [2024-10-01 20:30:51.418058] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:31:56.281 [2024-10-01 20:30:51.418164] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:56.539 [2024-10-01 20:30:51.560488] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:56.539 [2024-10-01 20:30:51.630857] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:56.539 [2024-10-01 20:30:51.630918] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:56.539 [2024-10-01 20:30:51.630932] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:56.539 [2024-10-01 20:30:51.630943] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:56.539 [2024-10-01 20:30:51.630952] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
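nvmfappstart, traced above, boots the target inside the namespace and blocks until its RPC socket answers. A hedged reduction of that launch-and-wait step; the real waitforlisten in autotest_common.sh is more defensive about timeouts and stale sockets, and the retry count here is illustrative:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do     # poll until the app listens on spdk.sock
    [[ -S /var/tmp/spdk.sock ]] &&
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &>/dev/null && break
    sleep 0.1
done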
00:31:56.539 [2024-10-01 20:30:51.631294] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:56.539 [2024-10-01 20:30:51.631360] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:56.539 [2024-10-01 20:30:51.631477] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:56.539 [2024-10-01 20:30:51.631488] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:57.474 [2024-10-01 20:30:52.474889] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:57.474 20:30:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:57.474 [2024-10-01 20:30:52.525538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:31:57.474 20:30:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:31:59.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:01.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:04.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:06.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:08.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:08.869 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:32:08.869 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:32:08.869 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:08.869 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:32:08.869 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:08.869 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:32:08.869 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:08.869 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:08.869 rmmod nvme_tcp 00:32:08.869 rmmod nvme_fabrics 00:32:08.869 rmmod nvme_keyring 00:32:08.869 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:08.869 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:32:08.869 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:32:08.869 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 73008 ']' 00:32:08.869 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 73008 00:32:08.869 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 73008 ']' 00:32:08.869 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 73008 00:32:08.869 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
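The five bare "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above are the whole payload of this test: num_iterations connect/disconnect cycles against the listener created at 10.0.0.3:4420. Roughly, as a minimal sketch (the real connect_disconnect.sh adds readiness checks between the two steps):

for ((i = 0; i < 5; i++)); do       # num_iterations=5 in the trace
    nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.3 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints the line seen above
done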
00:32:08.869 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:08.869 20:31:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73008 00:32:08.869 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:08.869 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:08.869 killing process with pid 73008 00:32:08.869 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73008' 00:32:08.869 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 73008 00:32:08.869 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 73008 00:32:09.129 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:09.129 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:09.129 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:09.129 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:32:09.129 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:32:09.129 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:09.129 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:32:09.129 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:09.129 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:09.129 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:09.129 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:09.129 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:09.129 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:09.129 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:09.129 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:09.129 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:09.129 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:09.129 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:09.129 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:09.389 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:09.389 20:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:09.389 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:09.389 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:09.389 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.389 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:09.389 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.389 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0 00:32:09.389 00:32:09.389 real 0m13.703s 00:32:09.389 user 0m49.587s 00:32:09.389 sys 0m1.975s 00:32:09.389 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:09.389 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:09.389 ************************************ 00:32:09.389 END TEST nvmf_connect_disconnect 00:32:09.389 ************************************ 00:32:09.389 20:31:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:32:09.389 20:31:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:09.389 20:31:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:09.389 20:31:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:32:09.389 ************************************ 00:32:09.389 START TEST nvmf_multitarget 00:32:09.389 ************************************ 00:32:09.389 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:32:09.389 * Looking for test storage... 
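In outline, the multitarget test that follows creates and deletes extra SPDK targets through the test's RPC proxy and asserts on the target count at each step. A sketch of that sequence, with script path and flags as they appear later in this trace; the [ ... ] checks stand in for the harness's '[' N '!=' N ']' assertions:

    rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]    # default + the two new targets
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # back to just the default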
00:32:09.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:09.389 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:09.389 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:32:09.389 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:09.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.649 --rc genhtml_branch_coverage=1 00:32:09.649 --rc genhtml_function_coverage=1 00:32:09.649 --rc genhtml_legend=1 00:32:09.649 --rc geninfo_all_blocks=1 00:32:09.649 --rc geninfo_unexecuted_blocks=1 00:32:09.649 00:32:09.649 ' 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:09.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.649 --rc genhtml_branch_coverage=1 00:32:09.649 --rc genhtml_function_coverage=1 00:32:09.649 --rc genhtml_legend=1 00:32:09.649 --rc geninfo_all_blocks=1 00:32:09.649 --rc geninfo_unexecuted_blocks=1 00:32:09.649 00:32:09.649 ' 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:09.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.649 --rc genhtml_branch_coverage=1 00:32:09.649 --rc genhtml_function_coverage=1 00:32:09.649 --rc genhtml_legend=1 00:32:09.649 --rc geninfo_all_blocks=1 00:32:09.649 --rc geninfo_unexecuted_blocks=1 00:32:09.649 00:32:09.649 ' 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:09.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.649 --rc genhtml_branch_coverage=1 00:32:09.649 --rc genhtml_function_coverage=1 00:32:09.649 --rc genhtml_legend=1 00:32:09.649 --rc geninfo_all_blocks=1 00:32:09.649 --rc geninfo_unexecuted_blocks=1 00:32:09.649 00:32:09.649 ' 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@7 -- # uname -s 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:09.649 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:32:09.649 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@15 -- # nvmftestinit 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # nvmf_veth_init 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:09.650 Cannot find device "nvmf_init_br" 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:09.650 Cannot find device "nvmf_init_br2" 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:09.650 Cannot find device "nvmf_tgt_br" 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:09.650 Cannot find device "nvmf_tgt_br2" 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:09.650 Cannot find device "nvmf_init_br" 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:09.650 Cannot find device "nvmf_init_br2" 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:09.650 Cannot find device "nvmf_tgt_br" 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:09.650 Cannot find device "nvmf_tgt_br2" 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:09.650 Cannot find device "nvmf_br" 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:09.650 Cannot find device "nvmf_init_if" 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:09.650 Cannot find device "nvmf_init_if2" 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:09.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:09.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:09.650 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:09.909 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:09.909 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:09.909 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:09.909 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:09.909 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:09.909 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:09.909 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:09.909 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:09.909 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:09.909 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:09.909 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:09.909 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:09.909 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:09.909 20:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:09.909 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:09.909 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:09.909 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:09.909 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:09.910 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:09.910 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:32:09.910 00:32:09.910 --- 10.0.0.3 ping statistics --- 00:32:09.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.910 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:09.910 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:09.910 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:32:09.910 00:32:09.910 --- 10.0.0.4 ping statistics --- 00:32:09.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.910 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:09.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:09.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:32:09.910 00:32:09.910 --- 10.0.0.1 ping statistics --- 00:32:09.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.910 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:09.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:09.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:32:09.910 00:32:09.910 --- 10.0.0.2 ping statistics --- 00:32:09.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.910 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # return 0 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=73474 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 73474 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 73474 ']' 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:09.910 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:32:10.169 [2024-10-01 20:31:05.186541] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
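The "Cannot find device" probes and the ip/iptables calls above are nvmf_veth_init building the test network from scratch. Collected into one runnable sketch (root required; device names and addresses exactly as traced, the for-loops are a condensation of the one-per-line calls):

    ip netns add nvmf_tgt_ns_spdk

    # One veth pair per interface; the *_br ends stay on the host for bridging.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target-facing ends move into the namespace where nvmf_tgt runs.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # A single bridge ties the host-side ends together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Admit NVMe/TCP (port 4420) and bridged traffic.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.3    # host -> target namespace, as the harness verifies above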
00:32:10.169 [2024-10-01 20:31:05.186632] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:10.169 [2024-10-01 20:31:05.323353] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:10.169 [2024-10-01 20:31:05.381051] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:10.169 [2024-10-01 20:31:05.381101] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:10.169 [2024-10-01 20:31:05.381125] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:10.169 [2024-10-01 20:31:05.381134] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:10.169 [2024-10-01 20:31:05.381141] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:10.169 [2024-10-01 20:31:05.381218] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:10.169 [2024-10-01 20:31:05.381296] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:10.169 [2024-10-01 20:31:05.381908] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:10.169 [2024-10-01 20:31:05.381938] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.428 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:10.428 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:32:10.428 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:10.428 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:10.428 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:32:10.428 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:10.428 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:32:10.428 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:32:10.428 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:32:10.428 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:32:10.428 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:32:10.687 "nvmf_tgt_1" 00:32:10.687 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:32:10.946 "nvmf_tgt_2" 00:32:10.946 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:32:10.946 20:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:32:10.946 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:32:10.946 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:32:11.204 true 00:32:11.204 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:32:11.204 true 00:32:11.204 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:32:11.204 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:32:11.463 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:32:11.463 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:32:11.463 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:32:11.463 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:11.463 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:32:11.463 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:11.463 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:32:11.463 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:11.463 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:11.463 rmmod nvme_tcp 00:32:11.463 rmmod nvme_fabrics 00:32:11.463 rmmod nvme_keyring 00:32:11.463 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:11.464 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:32:11.464 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:32:11.464 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 73474 ']' 00:32:11.464 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 73474 00:32:11.464 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 73474 ']' 00:32:11.464 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 73474 00:32:11.464 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:32:11.464 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:11.464 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73474 00:32:11.464 killing process with pid 73474 00:32:11.464 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:11.464 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:11.464 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
73474' 00:32:11.464 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 73474 00:32:11.464 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 73474 00:32:11.745 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:11.745 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:11.745 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:11.745 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:32:11.745 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:11.745 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:32:11.745 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:32:11.745 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:11.745 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:11.745 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:11.745 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:11.745 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:11.745 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:11.745 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:11.745 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:11.745 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:11.745 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:11.745 20:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:12.003 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:12.003 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:12.003 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:12.003 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:12.003 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:12.003 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:12.003 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:12.003 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:12.003 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # return 0 00:32:12.003 00:32:12.003 
real 0m2.599s 00:32:12.003 user 0m7.142s 00:32:12.003 sys 0m0.730s 00:32:12.003 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:12.003 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:32:12.003 ************************************ 00:32:12.003 END TEST nvmf_multitarget 00:32:12.003 ************************************ 00:32:12.003 20:31:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:32:12.003 20:31:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:12.003 20:31:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:12.003 20:31:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:32:12.003 ************************************ 00:32:12.003 START TEST nvmf_rpc 00:32:12.003 ************************************ 00:32:12.003 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:32:12.003 * Looking for test storage... 00:32:12.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:12.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.264 --rc genhtml_branch_coverage=1 00:32:12.264 --rc genhtml_function_coverage=1 00:32:12.264 --rc genhtml_legend=1 00:32:12.264 --rc geninfo_all_blocks=1 00:32:12.264 --rc geninfo_unexecuted_blocks=1 00:32:12.264 00:32:12.264 ' 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:12.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.264 --rc genhtml_branch_coverage=1 00:32:12.264 --rc genhtml_function_coverage=1 00:32:12.264 --rc genhtml_legend=1 00:32:12.264 --rc geninfo_all_blocks=1 00:32:12.264 --rc geninfo_unexecuted_blocks=1 00:32:12.264 00:32:12.264 ' 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:12.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.264 --rc genhtml_branch_coverage=1 00:32:12.264 --rc genhtml_function_coverage=1 00:32:12.264 --rc genhtml_legend=1 00:32:12.264 --rc geninfo_all_blocks=1 00:32:12.264 --rc geninfo_unexecuted_blocks=1 00:32:12.264 00:32:12.264 ' 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:12.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.264 --rc genhtml_branch_coverage=1 00:32:12.264 --rc genhtml_function_coverage=1 00:32:12.264 --rc genhtml_legend=1 00:32:12.264 --rc geninfo_all_blocks=1 00:32:12.264 --rc geninfo_unexecuted_blocks=1 00:32:12.264 00:32:12.264 ' 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:12.264 20:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:12.264 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:12.265 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # nvmf_veth_init 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:12.265 Cannot find device "nvmf_init_br" 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:32:12.265 20:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:12.265 Cannot find device "nvmf_init_br2" 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:12.265 Cannot find device "nvmf_tgt_br" 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:12.265 Cannot find device "nvmf_tgt_br2" 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:12.265 Cannot find device "nvmf_init_br" 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:12.265 Cannot find device "nvmf_init_br2" 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:12.265 Cannot find device "nvmf_tgt_br" 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:12.265 Cannot find device "nvmf_tgt_br2" 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:12.265 Cannot find device "nvmf_br" 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:12.265 Cannot find device "nvmf_init_if" 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true 00:32:12.265 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:12.524 Cannot find device "nvmf_init_if2" 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:12.524 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:12.524 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name 
nvmf_init_br2 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:12.524 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:12.783 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:12.783 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:32:12.783 00:32:12.783 --- 10.0.0.3 ping statistics --- 00:32:12.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.783 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:12.783 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:12.783 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:32:12.783 00:32:12.783 --- 10.0.0.4 ping statistics --- 00:32:12.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.783 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:12.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:12.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:32:12.783 00:32:12.783 --- 10.0.0.1 ping statistics --- 00:32:12.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.783 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:12.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:12.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:32:12.783 00:32:12.783 --- 10.0.0.2 ping statistics --- 00:32:12.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.783 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # return 0 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=73742 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 73742 00:32:12.783 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 73742 ']' 00:32:12.784 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:12.784 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:12.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:12.784 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:12.784 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:12.784 20:31:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:12.784 [2024-10-01 20:31:07.893759] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
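# ---- annotation (not part of the captured log) ----
# A minimal sketch of the topology that nvmf_veth_init assembled in the
# lines above, reduced to a single initiator/target pair for clarity
# (the real run creates two of each). Interface, namespace, and bridge
# names are taken verbatim from the log; everything here is standard
# iproute2/iptables, mirroring the commands the test executed.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                             # bridge ties the peer ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3   # same sanity check the log performs before starting nvmf_tgt
# ---- end annotation ----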
00:32:12.784 [2024-10-01 20:31:07.893854] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:13.042 [2024-10-01 20:31:08.036415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:13.042 [2024-10-01 20:31:08.105716] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:13.042 [2024-10-01 20:31:08.105786] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:13.043 [2024-10-01 20:31:08.105800] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:13.043 [2024-10-01 20:31:08.105810] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:13.043 [2024-10-01 20:31:08.105819] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:13.043 [2024-10-01 20:31:08.105979] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:13.043 [2024-10-01 20:31:08.106141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.043 [2024-10-01 20:31:08.106657] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:13.043 [2024-10-01 20:31:08.106035] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:13.043 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:13.043 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:32:13.043 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:13.043 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:13.043 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:32:13.301 "poll_groups": [ 00:32:13.301 { 00:32:13.301 "admin_qpairs": 0, 00:32:13.301 "completed_nvme_io": 0, 00:32:13.301 "current_admin_qpairs": 0, 00:32:13.301 "current_io_qpairs": 0, 00:32:13.301 "io_qpairs": 0, 00:32:13.301 "name": "nvmf_tgt_poll_group_000", 00:32:13.301 "pending_bdev_io": 0, 00:32:13.301 "transports": [] 00:32:13.301 }, 00:32:13.301 { 00:32:13.301 "admin_qpairs": 0, 00:32:13.301 "completed_nvme_io": 0, 00:32:13.301 "current_admin_qpairs": 0, 00:32:13.301 "current_io_qpairs": 0, 00:32:13.301 "io_qpairs": 0, 00:32:13.301 "name": "nvmf_tgt_poll_group_001", 00:32:13.301 "pending_bdev_io": 0, 00:32:13.301 "transports": [] 00:32:13.301 }, 00:32:13.301 { 00:32:13.301 "admin_qpairs": 0, 00:32:13.301 "completed_nvme_io": 0, 00:32:13.301 "current_admin_qpairs": 0, 00:32:13.301 "current_io_qpairs": 0, 
00:32:13.301 "io_qpairs": 0, 00:32:13.301 "name": "nvmf_tgt_poll_group_002", 00:32:13.301 "pending_bdev_io": 0, 00:32:13.301 "transports": [] 00:32:13.301 }, 00:32:13.301 { 00:32:13.301 "admin_qpairs": 0, 00:32:13.301 "completed_nvme_io": 0, 00:32:13.301 "current_admin_qpairs": 0, 00:32:13.301 "current_io_qpairs": 0, 00:32:13.301 "io_qpairs": 0, 00:32:13.301 "name": "nvmf_tgt_poll_group_003", 00:32:13.301 "pending_bdev_io": 0, 00:32:13.301 "transports": [] 00:32:13.301 } 00:32:13.301 ], 00:32:13.301 "tick_rate": 2200000000 00:32:13.301 }' 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:13.301 [2024-10-01 20:31:08.446142] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:32:13.301 "poll_groups": [ 00:32:13.301 { 00:32:13.301 "admin_qpairs": 0, 00:32:13.301 "completed_nvme_io": 0, 00:32:13.301 "current_admin_qpairs": 0, 00:32:13.301 "current_io_qpairs": 0, 00:32:13.301 "io_qpairs": 0, 00:32:13.301 "name": "nvmf_tgt_poll_group_000", 00:32:13.301 "pending_bdev_io": 0, 00:32:13.301 "transports": [ 00:32:13.301 { 00:32:13.301 "trtype": "TCP" 00:32:13.301 } 00:32:13.301 ] 00:32:13.301 }, 00:32:13.301 { 00:32:13.301 "admin_qpairs": 0, 00:32:13.301 "completed_nvme_io": 0, 00:32:13.301 "current_admin_qpairs": 0, 00:32:13.301 "current_io_qpairs": 0, 00:32:13.301 "io_qpairs": 0, 00:32:13.301 "name": "nvmf_tgt_poll_group_001", 00:32:13.301 "pending_bdev_io": 0, 00:32:13.301 "transports": [ 00:32:13.301 { 00:32:13.301 "trtype": "TCP" 00:32:13.301 } 00:32:13.301 ] 00:32:13.301 }, 00:32:13.301 { 00:32:13.301 "admin_qpairs": 0, 00:32:13.301 "completed_nvme_io": 0, 00:32:13.301 "current_admin_qpairs": 0, 00:32:13.301 "current_io_qpairs": 0, 00:32:13.301 "io_qpairs": 0, 00:32:13.301 "name": "nvmf_tgt_poll_group_002", 00:32:13.301 "pending_bdev_io": 0, 00:32:13.301 "transports": [ 00:32:13.301 { 00:32:13.301 "trtype": "TCP" 00:32:13.301 } 
00:32:13.301 ] 00:32:13.301 }, 00:32:13.301 { 00:32:13.301 "admin_qpairs": 0, 00:32:13.301 "completed_nvme_io": 0, 00:32:13.301 "current_admin_qpairs": 0, 00:32:13.301 "current_io_qpairs": 0, 00:32:13.301 "io_qpairs": 0, 00:32:13.301 "name": "nvmf_tgt_poll_group_003", 00:32:13.301 "pending_bdev_io": 0, 00:32:13.301 "transports": [ 00:32:13.301 { 00:32:13.301 "trtype": "TCP" 00:32:13.301 } 00:32:13.301 ] 00:32:13.301 } 00:32:13.301 ], 00:32:13.301 "tick_rate": 2200000000 00:32:13.301 }' 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:32:13.301 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:32:13.302 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:32:13.302 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:32:13.560 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:32:13.560 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:32:13.560 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:32:13.560 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:32:13.560 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:32:13.560 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.560 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:13.560 Malloc1 00:32:13.560 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.560 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:13.560 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.560 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:13.560 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.560 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:13.560 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.560 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:13.560 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.560 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:32:13.560 20:31:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.560 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:13.560 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.560 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:13.560 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.560 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:13.560 [2024-10-01 20:31:08.632912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:13.561 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.561 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -a 10.0.0.3 -s 4420 00:32:13.561 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:32:13.561 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -a 10.0.0.3 -s 4420 00:32:13.561 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:32:13.561 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:13.561 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:32:13.561 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:13.561 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:32:13.561 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:13.561 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:32:13.561 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:32:13.561 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -a 10.0.0.3 -s 4420 00:32:13.561 [2024-10-01 20:31:08.661318] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703' 00:32:13.561 Failed to write to /dev/nvme-fabrics: Input/output error 00:32:13.561 could not add new controller: failed to write to nvme-fabrics device 00:32:13.561 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 
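# ---- annotation (not part of the captured log) ----
# A condensed sketch of the access-control check the test just ran:
# with allow_any_host disabled, a connect from a host NQN that was never
# added must fail with "does not allow host"; it may succeed only after
# nvmf_subsystem_add_host. The scripts/rpc.py invocation stands in for
# the suite's rpc_cmd wrapper; the NQNs are the ones printed in the log.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703
SUBNQN=nqn.2016-06.io.spdk:cnode1
if nvme connect --hostnqn="$HOSTNQN" -t tcp -n "$SUBNQN" -a 10.0.0.3 -s 4420; then
    echo "unexpected: connect succeeded before the host was allowed" >&2
    exit 1
fi
scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"   # now allow this host
nvme connect --hostnqn="$HOSTNQN" -t tcp -n "$SUBNQN" -a 10.0.0.3 -s 4420
# ---- end annotation ----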
00:32:13.561 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:13.561 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:13.561 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:13.561 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:32:13.561 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.561 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:13.561 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.561 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:32:13.819 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:32:13.819 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:32:13.819 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:13.819 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:32:13.819 20:31:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:32:15.720 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:15.720 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:32:15.720 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:32:15.720 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:32:15.720 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:15.720 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:32:15.720 20:31:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:15.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:32:15.981 [2024-10-01 20:31:11.062389] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703' 00:32:15.981 Failed to write to /dev/nvme-fabrics: Input/output error 00:32:15.981 could not add new controller: failed to write to nvme-fabrics device 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 
-- # set +x 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.981 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:32:16.240 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:32:16.240 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:32:16.240 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:16.240 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:32:16.240 20:31:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:18.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:18.143 [2024-10-01 20:31:13.351855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.143 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:32:18.401 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:32:18.401 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:32:18.401 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:18.401 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:32:18.401 20:31:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:32:20.330 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:20.330 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:32:20.330 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:32:20.330 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:32:20.330 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:20.330 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:32:20.330 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:20.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:20.587 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:20.588 [2024-10-01 20:31:15.646818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:20.588 20:31:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:32:20.588 20:31:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:23.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:23.114 20:31:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:23.114 [2024-10-01 20:31:17.943165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.114 20:31:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:32:23.115 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:32:23.115 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:32:23.115 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:23.115 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:32:23.115 20:31:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1205 -- # sleep 2 00:32:25.010 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:25.010 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:32:25.010 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:32:25.010 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:32:25.010 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:25.010 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:32:25.010 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:25.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:25.285 20:31:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:25.285 [2024-10-01 20:31:20.338682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.285 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:32:25.549 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:32:25.549 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:32:25.549 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:25.549 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:32:25.549 20:31:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:32:27.450 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:27.450 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:32:27.450 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:32:27.450 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:32:27.450 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:27.450 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:32:27.450 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:27.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:27.450 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:27.450 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:32:27.450 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:27.450 20:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:27.450 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:27.450 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:27.450 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:32:27.450 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:27.450 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.450 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:27.451 [2024-10-01 20:31:22.642251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:32:27.451 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:32:27.709 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:32:27.709 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:32:27.709 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:27.709 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:32:27.710 20:31:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:32:29.609 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:29.609 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:32:29.609 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:32:29.609 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:32:29.609 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:29.609 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:32:29.609 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:29.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
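The loop traced above exercises one full NVMe-oF subsystem lifecycle per iteration. A minimal plain-shell sketch of that cycle, assuming the rpc.py path and the 10.0.0.3:4420 listener used throughout this run (the --hostnqn/--hostid flags and xtrace plumbing are omitted for brevity, and the polling loop is a simplified stand-in for the waitforserial helper, not its actual implementation):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# create the subsystem with a fixed serial, expose it over TCP, attach a namespace
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
$rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
# connect from the initiator side, then wait for the block device to appear
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
for i in $(seq 1 15); do
    lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME && break
    sleep 2
done
# tear the iteration back down in reverse order
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The second loop that starts below (rpc.sh@99) runs the same create/delete churn without the initiator connect, which is why its iterations complete in milliseconds rather than seconds.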
00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.868 [2024-10-01 20:31:24.949795] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.868 20:31:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.868 [2024-10-01 20:31:24.997721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:29.868 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.868 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:29.868 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.868 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.868 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.868 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:32:29.868 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.868 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.868 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.868 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:29.868 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.868 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.868 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.868 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:29.869 20:31:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.869 [2024-10-01 20:31:25.045767] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.869 [2024-10-01 20:31:25.093852] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.869 
20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.869 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:30.127 [2024-10-01 20:31:25.141902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.127 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:32:30.127 "poll_groups": [ 00:32:30.127 { 00:32:30.127 "admin_qpairs": 2, 00:32:30.127 "completed_nvme_io": 69, 00:32:30.127 "current_admin_qpairs": 0, 00:32:30.127 "current_io_qpairs": 0, 00:32:30.127 "io_qpairs": 16, 00:32:30.127 "name": "nvmf_tgt_poll_group_000", 00:32:30.127 "pending_bdev_io": 0, 00:32:30.127 "transports": [ 00:32:30.127 { 00:32:30.127 "trtype": "TCP" 00:32:30.127 } 00:32:30.127 ] 00:32:30.127 }, 00:32:30.127 { 00:32:30.127 "admin_qpairs": 3, 00:32:30.127 "completed_nvme_io": 67, 00:32:30.127 "current_admin_qpairs": 0, 00:32:30.127 "current_io_qpairs": 0, 00:32:30.127 "io_qpairs": 17, 00:32:30.127 "name": "nvmf_tgt_poll_group_001", 00:32:30.127 "pending_bdev_io": 0, 00:32:30.127 "transports": [ 00:32:30.127 { 00:32:30.127 "trtype": "TCP" 00:32:30.127 } 00:32:30.127 ] 00:32:30.127 }, 00:32:30.127 { 00:32:30.127 "admin_qpairs": 1, 00:32:30.127 "completed_nvme_io": 118, 00:32:30.127 "current_admin_qpairs": 0, 00:32:30.127 "current_io_qpairs": 0, 00:32:30.127 "io_qpairs": 19, 00:32:30.127 "name": "nvmf_tgt_poll_group_002", 00:32:30.127 "pending_bdev_io": 0, 00:32:30.127 "transports": [ 00:32:30.127 { 00:32:30.127 "trtype": "TCP" 00:32:30.127 } 00:32:30.128 ] 00:32:30.128 }, 00:32:30.128 { 00:32:30.128 "admin_qpairs": 1, 00:32:30.128 "completed_nvme_io": 166, 00:32:30.128 "current_admin_qpairs": 0, 00:32:30.128 "current_io_qpairs": 0, 00:32:30.128 "io_qpairs": 18, 00:32:30.128 "name": "nvmf_tgt_poll_group_003", 00:32:30.128 "pending_bdev_io": 0, 00:32:30.128 "transports": [ 00:32:30.128 { 00:32:30.128 "trtype": "TCP" 00:32:30.128 } 00:32:30.128 ] 00:32:30.128 } 00:32:30.128 ], 
00:32:30.128 "tick_rate": 2200000000 00:32:30.128 }' 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:30.128 rmmod nvme_tcp 00:32:30.128 rmmod nvme_fabrics 00:32:30.128 rmmod nvme_keyring 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 73742 ']' 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 73742 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 73742 ']' 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 73742 00:32:30.128 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:32:30.387 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:30.387 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73742 00:32:30.387 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:30.387 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:30.387 killing process with pid 73742 00:32:30.387 20:31:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73742' 00:32:30.387 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 73742 00:32:30.387 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 73742 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:30.646 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:30.647 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.647 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:30.647 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.906 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0 00:32:30.906 00:32:30.906 real 0m18.729s 00:32:30.906 user 1m8.680s 00:32:30.906 sys 0m2.721s 00:32:30.906 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:30.906 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:30.906 ************************************ 00:32:30.906 END TEST nvmf_rpc 00:32:30.906 ************************************ 00:32:30.906 20:31:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:32:30.906 20:31:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:30.906 20:31:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:30.906 20:31:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:32:30.906 ************************************ 00:32:30.906 START TEST nvmf_invalid 00:32:30.906 ************************************ 00:32:30.906 20:31:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:32:30.906 * Looking for test storage... 00:32:30.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:30.906 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:30.906 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:32:30.906 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:30.906 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:30.906 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:30.906 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:30.906 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:30.906 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:32:30.906 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:32:30.906 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:32:30.906 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:32:30.906 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:32:30.906 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:32:30.906 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:32:30.906 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:30.906 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:32:30.906 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:32:30.906 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:30.906 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:31.167 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:32:31.167 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:32:31.167 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:31.167 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:32:31.167 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:32:31.167 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:32:31.167 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:32:31.167 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:31.167 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:32:31.167 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:32:31.167 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:31.167 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:31.167 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:32:31.167 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:31.167 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:31.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.167 --rc genhtml_branch_coverage=1 00:32:31.167 --rc genhtml_function_coverage=1 00:32:31.167 --rc genhtml_legend=1 00:32:31.167 --rc geninfo_all_blocks=1 00:32:31.167 --rc geninfo_unexecuted_blocks=1 00:32:31.167 00:32:31.167 ' 00:32:31.167 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:31.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.167 --rc genhtml_branch_coverage=1 00:32:31.167 --rc genhtml_function_coverage=1 00:32:31.167 --rc genhtml_legend=1 00:32:31.167 --rc geninfo_all_blocks=1 00:32:31.167 --rc geninfo_unexecuted_blocks=1 00:32:31.167 00:32:31.167 ' 00:32:31.167 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:31.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.167 --rc genhtml_branch_coverage=1 00:32:31.168 --rc genhtml_function_coverage=1 00:32:31.168 --rc genhtml_legend=1 00:32:31.168 --rc geninfo_all_blocks=1 00:32:31.168 --rc geninfo_unexecuted_blocks=1 00:32:31.168 00:32:31.168 ' 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:31.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.168 --rc genhtml_branch_coverage=1 00:32:31.168 --rc genhtml_function_coverage=1 00:32:31.168 --rc genhtml_legend=1 00:32:31.168 --rc geninfo_all_blocks=1 00:32:31.168 --rc geninfo_unexecuted_blocks=1 00:32:31.168 00:32:31.168 ' 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:32:31.168 20:31:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:31.168 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # nvmf_veth_init 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:31.168 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
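nvmf_veth_init, invoked next, builds the virtual test network from the variables just defined. A condensed sketch of the same steps, using the interface names and addresses from this run (only the first initiator/target pair is shown; the real helper also creates nvmf_init_if2/nvmf_tgt_if2 for 10.0.0.2 and 10.0.0.4):

# target side lives in its own network namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# initiator address on the host, target address inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# a bridge ties the host-side veth peers together so 10.0.0.1 can reach 10.0.0.3
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# open the NVMe/TCP port through the host firewall
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The "Cannot find device" and "Cannot open network namespace" messages below are expected: the helper tears down any leftover topology from a previous test before creating a fresh one, and on a clean host there is nothing to delete.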
00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:31.169 Cannot find device "nvmf_init_br" 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:31.169 Cannot find device "nvmf_init_br2" 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:31.169 Cannot find device "nvmf_tgt_br" 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:31.169 Cannot find device "nvmf_tgt_br2" 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:31.169 Cannot find device "nvmf_init_br" 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:31.169 Cannot find device "nvmf_init_br2" 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:31.169 Cannot find device "nvmf_tgt_br" 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:31.169 Cannot find device "nvmf_tgt_br2" 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:31.169 Cannot find device "nvmf_br" 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:31.169 Cannot find device "nvmf_init_if" 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:31.169 Cannot find device "nvmf_init_if2" 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:31.169 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:31.169 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:31.169 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:31.429 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:31.429 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:31.429 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:31.429 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:31.429 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:31.429 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:31.429 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:31.429 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:31.429 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:31.429 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:31.429 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:31.429 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:31.429 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:31.429 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:31.429 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:31.429 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:31.429 20:31:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:31.429 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:31.429 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:31.429 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:31.430 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:31.430 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:32:31.430 00:32:31.430 --- 10.0.0.3 ping statistics --- 00:32:31.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.430 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:31.430 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:31.430 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:32:31.430 00:32:31.430 --- 10.0.0.4 ping statistics --- 00:32:31.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.430 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:31.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:31.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:32:31.430 00:32:31.430 --- 10.0.0.1 ping statistics --- 00:32:31.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.430 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:31.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:31.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:32:31.430 00:32:31.430 --- 10.0.0.2 ping statistics --- 00:32:31.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.430 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # return 0 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=74290 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 74290 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 74290 ']' 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:31.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:31.430 20:31:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:32:31.697 [2024-10-01 20:31:26.706812] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
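At this point the trace has finished standing up the test network and is launching the target. To summarize what nvmf/common.sh did above: the teardown attempts at the top (the "Cannot find device" and "No such file or directory" messages are expected on a clean host, which is why each is followed by "true") are followed by creating the nvmf_tgt_ns_spdk namespace, wiring four veth pairs into a common bridge, addressing 10.0.0.1-10.0.0.4/24, opening TCP port 4420 in the firewall, and verifying reachability in both directions with the four pings. The sketch below reconstructs that topology from the logged commands; it is a condensed illustration rather than the script itself (the real ipts helper, for example, appends -m comment --comment 'SPDK_NVMF:...' to every iptables rule, as visible above, so cleanup can later find and delete exactly the rules SPDK added).

# Condensed reconstruction of the topology built by nvmf/common.sh above.
# Names and addresses are taken from the trace; error handling is omitted.
ip netns add nvmf_tgt_ns_spdk

# Two initiator-side and two target-side veth pairs; the *_br peers stay in
# the default namespace and are enslaved to one bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Admit NVMe/TCP traffic (port 4420) and bridge-local forwarding.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks: initiator -> target, then target -> initiator.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

With the network up, nvmfappstart runs nvmf_tgt inside the namespace (pid 74290 in this run) and waitforlisten blocks until the application answers on /var/tmp/spdk.sock, which is why the "Waiting for process to start up..." line appears just before the SPDK/DPDK startup banner that follows.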
00:32:31.697 [2024-10-01 20:31:26.706943] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:31.697 [2024-10-01 20:31:26.852749] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:31.697 [2024-10-01 20:31:26.911513] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:31.697 [2024-10-01 20:31:26.911563] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:31.697 [2024-10-01 20:31:26.911574] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:31.697 [2024-10-01 20:31:26.911583] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:31.697 [2024-10-01 20:31:26.911590] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:31.697 [2024-10-01 20:31:26.911759] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:31.697 [2024-10-01 20:31:26.911945] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.697 [2024-10-01 20:31:26.911852] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:31.697 [2024-10-01 20:31:26.911946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:31.955 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:31.955 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:32:31.955 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:31.955 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:31.955 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:32:31.955 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:31.955 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:32:31.955 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15384 00:32:32.213 [2024-10-01 20:31:27.371231] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:32:32.213 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/10/01 20:31:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15384 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:32:32.213 request: 00:32:32.213 { 00:32:32.213 "method": "nvmf_create_subsystem", 00:32:32.213 "params": { 00:32:32.213 "nqn": "nqn.2016-06.io.spdk:cnode15384", 00:32:32.213 "tgt_name": "foobar" 00:32:32.213 } 00:32:32.213 } 00:32:32.213 Got JSON-RPC error response 00:32:32.213 GoRPCClient: error on JSON-RPC call' 00:32:32.213 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/10/01 20:31:27 error on JSON-RPC call, method: nvmf_create_subsystem, 
params: map[nqn:nqn.2016-06.io.spdk:cnode15384 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:32:32.213 request: 00:32:32.213 { 00:32:32.213 "method": "nvmf_create_subsystem", 00:32:32.213 "params": { 00:32:32.213 "nqn": "nqn.2016-06.io.spdk:cnode15384", 00:32:32.213 "tgt_name": "foobar" 00:32:32.213 } 00:32:32.213 } 00:32:32.213 Got JSON-RPC error response 00:32:32.213 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:32:32.213 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:32:32.213 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode12540 00:32:32.470 [2024-10-01 20:31:27.711540] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12540: invalid serial number 'SPDKISFASTANDAWESOME' 00:32:32.729 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/10/01 20:31:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12540 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:32:32.729 request: 00:32:32.729 { 00:32:32.729 "method": "nvmf_create_subsystem", 00:32:32.729 "params": { 00:32:32.729 "nqn": "nqn.2016-06.io.spdk:cnode12540", 00:32:32.729 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:32:32.729 } 00:32:32.729 } 00:32:32.729 Got JSON-RPC error response 00:32:32.729 GoRPCClient: error on JSON-RPC call' 00:32:32.729 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/10/01 20:31:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12540 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:32:32.729 request: 00:32:32.729 { 00:32:32.729 "method": "nvmf_create_subsystem", 00:32:32.729 "params": { 00:32:32.729 "nqn": "nqn.2016-06.io.spdk:cnode12540", 00:32:32.729 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:32:32.729 } 00:32:32.729 } 00:32:32.729 Got JSON-RPC error response 00:32:32.729 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:32:32.729 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:32:32.729 20:31:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18096 00:32:32.988 [2024-10-01 20:31:28.055868] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18096: invalid model number 'SPDK_Controller' 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/10/01 20:31:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode18096], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:32:32.988 request: 00:32:32.988 { 00:32:32.988 "method": "nvmf_create_subsystem", 00:32:32.988 "params": { 00:32:32.988 "nqn": "nqn.2016-06.io.spdk:cnode18096", 00:32:32.988 "model_number": "SPDK_Controller\u001f" 
00:32:32.988 } 00:32:32.988 } 00:32:32.988 Got JSON-RPC error response 00:32:32.988 GoRPCClient: error on JSON-RPC call' 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/10/01 20:31:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode18096], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:32:32.988 request: 00:32:32.988 { 00:32:32.988 "method": "nvmf_create_subsystem", 00:32:32.988 "params": { 00:32:32.988 "nqn": "nqn.2016-06.io.spdk:cnode18096", 00:32:32.988 "model_number": "SPDK_Controller\u001f" 00:32:32.988 } 00:32:32.988 } 00:32:32.988 Got JSON-RPC error response 00:32:32.988 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:32.988 
20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:32:32.988 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 
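The long run of printf/echo/string+= steps here is target/invalid.sh's gen_random_s helper unrolled by xtrace: for each requested character it picks a random entry from a table of ASCII codes 32-127, converts the code to hex with printf %x, decodes it with echo -e, and appends the character to the string under construction. A condensed sketch of that pattern, with the helper's internals inferred from the trace (the real script may differ in detail):

# Hedged reconstruction of the gen_random_s pattern seen in the trace.
gen_random_s() {
    local length=$1 ll string=''
    local chars=($(seq 32 127))   # candidate ASCII codes, as listed in the log
    for ((ll = 0; ll < length; ll++)); do
        # random code -> hex -> character, appended to the result
        string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
    done
    echo "$string"
}

# The 21-character result is then fed to the RPC, which must reject it:
sn=$(gen_random_s 21)
out=$(scripts/rpc.py nvmf_create_subsystem -s "$sn" \
      nqn.2016-06.io.spdk:cnode22035 2>&1) || true
[[ $out == *'Invalid SN'* ]]

The same machinery is reused a few steps further on with length 41 to produce a model number that must fail with "Invalid MN".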
00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x32' 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ w == \- ]] 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'w-O~^?TW>7*x=r72SjFL' 00:32:32.989 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'w-O~^?TW>7*x=r72SjFL' nqn.2016-06.io.spdk:cnode22035 00:32:33.556 [2024-10-01 20:31:28.588361] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22035: invalid serial number 'w-O~^?TW>7*x=r72SjFL' 00:32:33.556 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/10/01 20:31:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode22035 serial_number:w-O~^?TW>7*x=r72SjFL], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN w-O~^?TW>7*x=r72SjFL 00:32:33.556 request: 00:32:33.556 { 00:32:33.556 "method": "nvmf_create_subsystem", 00:32:33.556 "params": { 00:32:33.556 "nqn": "nqn.2016-06.io.spdk:cnode22035", 
00:32:33.556 "serial_number": "w-O~^?TW\u007f>7*x=r72SjFL" 00:32:33.556 } 00:32:33.556 } 00:32:33.556 Got JSON-RPC error response 00:32:33.556 GoRPCClient: error on JSON-RPC call' 00:32:33.556 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/10/01 20:31:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode22035 serial_number:w-O~^?TW>7*x=r72SjFL], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN w-O~^?TW>7*x=r72SjFL 00:32:33.556 request: 00:32:33.556 { 00:32:33.556 "method": "nvmf_create_subsystem", 00:32:33.556 "params": { 00:32:33.556 "nqn": "nqn.2016-06.io.spdk:cnode22035", 00:32:33.556 "serial_number": "w-O~^?TW\u007f>7*x=r72SjFL" 00:32:33.556 } 00:32:33.556 } 00:32:33.556 Got JSON-RPC error response 00:32:33.556 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:32:33.556 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:32:33.556 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:32:33.556 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:32:33.556 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:32:33.556 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:32:33.556 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:32:33.556 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.556 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:32:33.556 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:32:33.556 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:32:33.556 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:32:33.557 20:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:32:33.557 
20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 
00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:32:33.557 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x7d' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:32:33.558 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:32:33.816 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:32:33.816 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.816 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.816 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 40 00:32:33.816 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:32:33.816 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:32:33.816 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:32:33.816 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:32:33.816 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ * == \- ]] 00:32:33.816 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '*KTK#2+oe6S"]'\''m>*EY\Y/"l'\''j!bs9IQ|}4@Aa}y(' 00:32:33.816 20:31:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '*KTK#2+oe6S"]'\''m>*EY\Y/"l'\''j!bs9IQ|}4@Aa}y(' nqn.2016-06.io.spdk:cnode8071 00:32:34.074 [2024-10-01 20:31:29.096795] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8071: invalid model number '*KTK#2+oe6S"]'m>*EY\Y/"l'j!bs9IQ|}4@Aa}y(' 00:32:34.074 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/10/01 20:31:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:*KTK#2+oe6S"]'\''m>*EY\Y/"l'\''j!bs9IQ|}4@Aa}y( nqn:nqn.2016-06.io.spdk:cnode8071], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN *KTK#2+oe6S"]'\''m>*EY\Y/"l'\''j!bs9IQ|}4@Aa}y( 00:32:34.074 request: 00:32:34.074 { 00:32:34.074 "method": "nvmf_create_subsystem", 00:32:34.074 "params": { 00:32:34.074 "nqn": "nqn.2016-06.io.spdk:cnode8071", 00:32:34.074 "model_number": "*KTK#2+oe6S\"]'\''m>*EY\\Y/\"l'\''j!bs9IQ|}4@Aa}y(" 00:32:34.074 } 00:32:34.074 } 00:32:34.074 Got JSON-RPC error response 00:32:34.074 GoRPCClient: error on JSON-RPC call' 00:32:34.074 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/10/01 20:31:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:*KTK#2+oe6S"]'m>*EY\Y/"l'j!bs9IQ|}4@Aa}y( nqn:nqn.2016-06.io.spdk:cnode8071], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN *KTK#2+oe6S"]'m>*EY\Y/"l'j!bs9IQ|}4@Aa}y( 00:32:34.074 request: 00:32:34.074 { 00:32:34.074 "method": "nvmf_create_subsystem", 00:32:34.074 "params": { 00:32:34.074 "nqn": "nqn.2016-06.io.spdk:cnode8071", 00:32:34.074 "model_number": "*KTK#2+oe6S\"]'m>*EY\\Y/\"l'j!bs9IQ|}4@Aa}y(" 00:32:34.074 } 00:32:34.074 } 00:32:34.074 Got JSON-RPC error response 00:32:34.074 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:32:34.074 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:32:34.331 [2024-10-01 20:31:29.397129] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:34.331 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:32:34.589 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:32:34.589 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:32:34.589 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:32:34.589 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@67 -- # IP= 00:32:34.589 20:31:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:32:34.846 [2024-10-01 20:31:30.001907] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:32:34.846 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/10/01 20:31:30 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:32:34.846 request: 00:32:34.846 { 00:32:34.846 "method": "nvmf_subsystem_remove_listener", 00:32:34.846 "params": { 00:32:34.846 "nqn": "nqn.2016-06.io.spdk:cnode", 00:32:34.846 "listen_address": { 00:32:34.846 "trtype": "tcp", 00:32:34.846 "traddr": "", 00:32:34.846 "trsvcid": "4421" 00:32:34.846 } 00:32:34.846 } 00:32:34.846 } 00:32:34.846 Got JSON-RPC error response 00:32:34.846 GoRPCClient: error on JSON-RPC call' 00:32:34.846 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/10/01 20:31:30 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:32:34.846 request: 00:32:34.846 { 00:32:34.846 "method": "nvmf_subsystem_remove_listener", 00:32:34.846 "params": { 00:32:34.846 "nqn": "nqn.2016-06.io.spdk:cnode", 00:32:34.846 "listen_address": { 00:32:34.846 "trtype": "tcp", 00:32:34.846 "traddr": "", 00:32:34.846 "trsvcid": "4421" 00:32:34.846 } 00:32:34.846 } 00:32:34.846 } 00:32:34.846 Got JSON-RPC error response 00:32:34.846 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:32:34.846 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13293 -i 0 00:32:35.104 [2024-10-01 20:31:30.270054] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13293: invalid cntlid range [0-65519] 00:32:35.104 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/10/01 20:31:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode13293], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:32:35.104 request: 00:32:35.104 { 00:32:35.104 "method": "nvmf_create_subsystem", 00:32:35.104 "params": { 00:32:35.104 "nqn": "nqn.2016-06.io.spdk:cnode13293", 00:32:35.104 "min_cntlid": 0 00:32:35.104 } 00:32:35.104 } 00:32:35.104 Got JSON-RPC error response 00:32:35.104 GoRPCClient: error on JSON-RPC call' 00:32:35.104 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/10/01 20:31:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode13293], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:32:35.104 request: 00:32:35.104 { 00:32:35.104 "method": "nvmf_create_subsystem", 00:32:35.104 "params": { 00:32:35.104 "nqn": "nqn.2016-06.io.spdk:cnode13293", 
00:32:35.104 "min_cntlid": 0 00:32:35.104 } 00:32:35.104 } 00:32:35.104 Got JSON-RPC error response 00:32:35.104 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:32:35.104 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4504 -i 65520 00:32:35.669 [2024-10-01 20:31:30.626413] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4504: invalid cntlid range [65520-65519] 00:32:35.669 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/10/01 20:31:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode4504], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:32:35.669 request: 00:32:35.669 { 00:32:35.669 "method": "nvmf_create_subsystem", 00:32:35.669 "params": { 00:32:35.669 "nqn": "nqn.2016-06.io.spdk:cnode4504", 00:32:35.669 "min_cntlid": 65520 00:32:35.669 } 00:32:35.669 } 00:32:35.669 Got JSON-RPC error response 00:32:35.669 GoRPCClient: error on JSON-RPC call' 00:32:35.669 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/10/01 20:31:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode4504], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:32:35.669 request: 00:32:35.669 { 00:32:35.669 "method": "nvmf_create_subsystem", 00:32:35.669 "params": { 00:32:35.669 "nqn": "nqn.2016-06.io.spdk:cnode4504", 00:32:35.669 "min_cntlid": 65520 00:32:35.669 } 00:32:35.669 } 00:32:35.669 Got JSON-RPC error response 00:32:35.669 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:32:35.669 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11399 -I 0 00:32:35.669 [2024-10-01 20:31:30.914663] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11399: invalid cntlid range [1-0] 00:32:35.928 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/10/01 20:31:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode11399], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:32:35.928 request: 00:32:35.928 { 00:32:35.928 "method": "nvmf_create_subsystem", 00:32:35.928 "params": { 00:32:35.928 "nqn": "nqn.2016-06.io.spdk:cnode11399", 00:32:35.928 "max_cntlid": 0 00:32:35.928 } 00:32:35.928 } 00:32:35.928 Got JSON-RPC error response 00:32:35.928 GoRPCClient: error on JSON-RPC call' 00:32:35.928 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/10/01 20:31:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode11399], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:32:35.928 request: 00:32:35.928 { 00:32:35.928 "method": "nvmf_create_subsystem", 00:32:35.928 "params": { 00:32:35.928 "nqn": "nqn.2016-06.io.spdk:cnode11399", 00:32:35.928 "max_cntlid": 0 00:32:35.928 } 00:32:35.928 } 00:32:35.928 Got JSON-RPC error response 00:32:35.928 
GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:32:35.928 20:31:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode781 -I 65520 00:32:36.185 [2024-10-01 20:31:31.232828] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode781: invalid cntlid range [1-65520] 00:32:36.185 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/10/01 20:31:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode781], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:32:36.185 request: 00:32:36.185 { 00:32:36.185 "method": "nvmf_create_subsystem", 00:32:36.185 "params": { 00:32:36.185 "nqn": "nqn.2016-06.io.spdk:cnode781", 00:32:36.185 "max_cntlid": 65520 00:32:36.185 } 00:32:36.185 } 00:32:36.185 Got JSON-RPC error response 00:32:36.185 GoRPCClient: error on JSON-RPC call' 00:32:36.185 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/10/01 20:31:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode781], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:32:36.185 request: 00:32:36.185 { 00:32:36.185 "method": "nvmf_create_subsystem", 00:32:36.185 "params": { 00:32:36.185 "nqn": "nqn.2016-06.io.spdk:cnode781", 00:32:36.185 "max_cntlid": 65520 00:32:36.185 } 00:32:36.185 } 00:32:36.185 Got JSON-RPC error response 00:32:36.185 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:32:36.185 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11894 -i 6 -I 5 00:32:36.444 [2024-10-01 20:31:31.573187] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11894: invalid cntlid range [6-5] 00:32:36.444 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/10/01 20:31:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode11894], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:32:36.444 request: 00:32:36.444 { 00:32:36.444 "method": "nvmf_create_subsystem", 00:32:36.444 "params": { 00:32:36.444 "nqn": "nqn.2016-06.io.spdk:cnode11894", 00:32:36.444 "min_cntlid": 6, 00:32:36.444 "max_cntlid": 5 00:32:36.444 } 00:32:36.444 } 00:32:36.444 Got JSON-RPC error response 00:32:36.444 GoRPCClient: error on JSON-RPC call' 00:32:36.444 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/10/01 20:31:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode11894], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:32:36.444 request: 00:32:36.444 { 00:32:36.444 "method": "nvmf_create_subsystem", 00:32:36.444 "params": { 00:32:36.444 "nqn": "nqn.2016-06.io.spdk:cnode11894", 00:32:36.444 "min_cntlid": 6, 00:32:36.444 "max_cntlid": 5 00:32:36.444 } 00:32:36.444 } 00:32:36.444 Got JSON-RPC error response 00:32:36.444 GoRPCClient: error on JSON-RPC call == 
*\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:32:36.444 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:32:36.703 { 00:32:36.703 "name": "foobar", 00:32:36.703 "method": "nvmf_delete_target", 00:32:36.703 "req_id": 1 00:32:36.703 } 00:32:36.703 Got JSON-RPC error response 00:32:36.703 response: 00:32:36.703 { 00:32:36.703 "code": -32602, 00:32:36.703 "message": "The specified target doesn'\''t exist, cannot delete it." 00:32:36.703 }' 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:32:36.703 { 00:32:36.703 "name": "foobar", 00:32:36.703 "method": "nvmf_delete_target", 00:32:36.703 "req_id": 1 00:32:36.703 } 00:32:36.703 Got JSON-RPC error response 00:32:36.703 response: 00:32:36.703 { 00:32:36.703 "code": -32602, 00:32:36.703 "message": "The specified target doesn't exist, cannot delete it." 00:32:36.703 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:36.703 rmmod nvme_tcp 00:32:36.703 rmmod nvme_fabrics 00:32:36.703 rmmod nvme_keyring 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 74290 ']' 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 74290 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 74290 ']' 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 74290 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74290 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:36.703 killing process 
with pid 74290 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74290' 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 74290 00:32:36.703 20:31:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 74290 00:32:36.961 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:36.961 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:36.961 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:36.961 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:32:36.961 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:32:36.961 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-restore 00:32:36.961 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:36.961 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:36.961 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:36.961 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:36.961 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:36.961 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:36.961 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:36.961 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:36.961 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:36.961 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:36.961 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:36.961 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:36.961 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:37.220 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:37.220 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:37.220 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:37.220 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:37.220 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.220 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:37.220 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:37.220 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # 
return 0 00:32:37.220 00:32:37.220 real 0m6.343s 00:32:37.220 user 0m24.546s 00:32:37.220 sys 0m1.383s 00:32:37.220 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:37.220 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:32:37.220 ************************************ 00:32:37.220 END TEST nvmf_invalid 00:32:37.220 ************************************ 00:32:37.220 20:31:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:32:37.220 20:31:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:37.220 20:31:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:37.220 20:31:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:32:37.220 ************************************ 00:32:37.220 START TEST nvmf_connect_stress 00:32:37.220 ************************************ 00:32:37.220 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:32:37.220 * Looking for test storage... 00:32:37.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:37.220 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:37.220 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:32:37.220 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:37.479 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:37.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.480 --rc genhtml_branch_coverage=1 00:32:37.480 --rc genhtml_function_coverage=1 00:32:37.480 --rc genhtml_legend=1 00:32:37.480 --rc geninfo_all_blocks=1 00:32:37.480 --rc geninfo_unexecuted_blocks=1 00:32:37.480 00:32:37.480 ' 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:37.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.480 --rc genhtml_branch_coverage=1 00:32:37.480 --rc genhtml_function_coverage=1 00:32:37.480 --rc genhtml_legend=1 00:32:37.480 --rc geninfo_all_blocks=1 00:32:37.480 --rc geninfo_unexecuted_blocks=1 00:32:37.480 00:32:37.480 ' 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:37.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.480 --rc genhtml_branch_coverage=1 00:32:37.480 --rc genhtml_function_coverage=1 00:32:37.480 --rc genhtml_legend=1 00:32:37.480 --rc geninfo_all_blocks=1 00:32:37.480 --rc geninfo_unexecuted_blocks=1 00:32:37.480 00:32:37.480 ' 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:37.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.480 --rc genhtml_branch_coverage=1 00:32:37.480 --rc genhtml_function_coverage=1 00:32:37.480 --rc genhtml_legend=1 00:32:37.480 --rc geninfo_all_blocks=1 00:32:37.480 --rc geninfo_unexecuted_blocks=1 00:32:37.480 00:32:37.480 ' 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.480 20:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:37.480 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:32:37.480 
20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:37.480 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # nvmf_veth_init 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:37.481 20:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:37.481 Cannot find device "nvmf_init_br" 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:37.481 Cannot find device "nvmf_init_br2" 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:37.481 Cannot find device "nvmf_tgt_br" 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:37.481 Cannot find device "nvmf_tgt_br2" 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:37.481 Cannot find device "nvmf_init_br" 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:37.481 Cannot find device "nvmf_init_br2" 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:37.481 Cannot find device "nvmf_tgt_br" 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:37.481 Cannot find device "nvmf_tgt_br2" 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:37.481 Cannot find device "nvmf_br" 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:37.481 Cannot find device "nvmf_init_if" 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:37.481 Cannot find device "nvmf_init_if2" 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:37.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:37.481 20:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:37.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:37.481 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:37.771 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:37.771 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:37.771 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:37.771 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:37.772 20:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:37.772 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:37.772 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:32:37.772 00:32:37.772 --- 10.0.0.3 ping statistics --- 00:32:37.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.772 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:37.772 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:37.772 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:32:37.772 00:32:37.772 --- 10.0.0.4 ping statistics --- 00:32:37.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.772 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:37.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:37.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:32:37.772 00:32:37.772 --- 10.0.0.1 ping statistics --- 00:32:37.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.772 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:37.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:37.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:32:37.772 00:32:37.772 --- 10.0.0.2 ping statistics --- 00:32:37.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.772 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # return 0 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=74844 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 74844 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 74844 ']' 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:37.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:37.772 20:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:38.036 [2024-10-01 20:31:33.032066] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:32:38.036 [2024-10-01 20:31:33.032170] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:38.036 [2024-10-01 20:31:33.170324] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:38.036 [2024-10-01 20:31:33.232244] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:38.036 [2024-10-01 20:31:33.232305] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:38.036 [2024-10-01 20:31:33.232317] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:38.036 [2024-10-01 20:31:33.232326] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:38.036 [2024-10-01 20:31:33.232333] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:38.036 [2024-10-01 20:31:33.232452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:38.036 [2024-10-01 20:31:33.233050] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:38.036 [2024-10-01 20:31:33.233094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:38.294 [2024-10-01 20:31:33.440418] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:38.294 20:31:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:38.294 [2024-10-01 20:31:33.458062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:38.294 NULL1 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=74883 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:32:38.294 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.295 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:38.861 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:32:38.861 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:38.861 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:38.861 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.861 20:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:39.119 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.119 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:39.119 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:39.119 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.119 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:39.377 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.377 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:39.377 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:39.377 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.377 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:39.635 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.635 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:39.635 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:39.636 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.636 20:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:40.200 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.200 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:40.200 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:40.200 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.200 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:40.459 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.459 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:40.459 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:40.459 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.459 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:40.718 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.718 
20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:40.718 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:40.718 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.718 20:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:40.976 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.976 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:40.976 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:40.976 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.976 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:41.235 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.235 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:41.235 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:41.235 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.235 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:41.802 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.802 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:41.802 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:41.803 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.803 20:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:42.060 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.060 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:42.060 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:42.060 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.060 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:42.319 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.319 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:42.319 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:42.319 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.319 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:42.577 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.577 20:31:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:42.577 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:42.577 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.577 20:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:42.835 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.835 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:42.835 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:42.835 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.835 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:43.402 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.402 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:43.402 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:43.402 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.402 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:43.660 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.660 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:43.660 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:43.660 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.660 20:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:43.917 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.917 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:43.917 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:43.917 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.917 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:44.176 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.176 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:44.176 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:44.176 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.176 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:44.435 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.435 20:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:44.435 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:44.435 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.435 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:45.026 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.026 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:45.026 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:45.026 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.026 20:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:45.305 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.305 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:45.305 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:45.305 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.305 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:45.563 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.563 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:45.563 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:45.563 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.563 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:45.821 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.821 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:45.821 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:45.821 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.821 20:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:46.079 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.079 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:46.079 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:46.079 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.079 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:46.646 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.646 20:31:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:46.647 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:46.647 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.647 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:46.905 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.905 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:46.905 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:46.905 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.905 20:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:47.163 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.163 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:47.163 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:47.163 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.163 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:47.422 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.422 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:47.422 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:47.422 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.422 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:47.679 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.680 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:47.680 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:47.680 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.680 20:31:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:48.247 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.247 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:48.247 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:48.247 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.247 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:48.507 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.507 20:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:48.507 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:32:48.507 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.507 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:48.507 Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:32:48.765 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.766 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74883 00:32:48.766 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (74883) - No such process 00:32:48.766 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 74883 00:32:48.766 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:32:48.766 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:48.766 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:32:48.766 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:48.766 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:32:48.766 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:48.766 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:32:48.766 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:48.766 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:48.766 rmmod nvme_tcp 00:32:48.766 rmmod nvme_fabrics 00:32:48.766 rmmod nvme_keyring 00:32:48.766 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:48.766 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:32:48.766 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:32:48.766 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 74844 ']' 00:32:48.766 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 74844 00:32:48.766 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 74844 ']' 00:32:48.766 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 74844 00:32:48.766 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:32:48.766 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:48.766 20:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74844 00:32:49.024 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:49.024 20:31:44 
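
The trace above is the connect_stress watchdog winding down: connect_stress.sh@27-28 assemble the batched RPC payload (the repeated "for i in $(seq 1 20)" / "cat" pairs), and @34/@35 then alternate "kill -0 74883" probes with rpc_cmd batches until the stress process exits, at which point line 34 reports "No such process" and @38 reaps it with wait. A minimal reconstruction of that traced shape follows; it is a sketch, not the verbatim script, and STRESS_PID plus rpc.txt stand in for the script's own names.

# Sketch of the poll loop traced at connect_stress.sh@34-39.
while kill -0 "$STRESS_PID" 2>/dev/null; do   # signal 0 only checks that the pid still exists
    rpc_cmd < rpc.txt                         # keep replaying the batched RPCs meanwhile
done
wait "$STRESS_PID" || true                    # reap the child once kill -0 starts failing (@38)
rm -f rpc.txt                                 # drop the RPC batch file (@39)
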
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:49.024 killing process with pid 74844 00:32:49.024 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74844' 00:32:49.024 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 74844 00:32:49.024 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 74844 00:32:49.024 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:49.024 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:49.024 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:49.024 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:32:49.024 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:32:49.024 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:49.024 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:32:49.024 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:49.024 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:49.024 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:49.024 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:49.283 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:49.283 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:49.283 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:49.283 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:49.283 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:49.283 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:49.283 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:49.283 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:49.283 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:49.283 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:49.283 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:49.283 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:49.283 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.283 
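
The teardown traced here undoes the harness's firewall and interface state without touching anything else on the host: every rule the test installs is tagged with an SPDK_NVMF comment, so nvmftestfini can strip exactly those rules in one pass. The traced add/remove pair (nvmf/common.sh@788/@789) reduces to the sketch below, using one of the rules visible later in this log.

# Setup tags each rule with a comment naming the rule itself...
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
# ...so teardown can filter every tagged rule out of the saved ruleset at once.
iptables-save | grep -v SPDK_NVMF | iptables-restore
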
20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:49.283 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.283 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0 00:32:49.283 00:32:49.283 real 0m12.131s 00:32:49.283 user 0m39.484s 00:32:49.283 sys 0m3.284s 00:32:49.283 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:49.283 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:32:49.283 ************************************ 00:32:49.283 END TEST nvmf_connect_stress 00:32:49.283 ************************************ 00:32:49.283 20:31:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:32:49.283 20:31:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:49.283 20:31:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:49.283 20:31:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:32:49.283 ************************************ 00:32:49.283 START TEST nvmf_fused_ordering 00:32:49.283 ************************************ 00:32:49.283 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:32:49.542 * Looking for test storage... 00:32:49.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:49.542 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:49.542 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:32:49.542 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:49.542 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:49.542 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:49.542 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:49.542 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:49.542 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:32:49.542 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:32:49.542 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:32:49.542 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:32:49.542 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:32:49.542 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:32:49.542 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:32:49.542 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:49.542 20:31:44 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:49.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.543 --rc genhtml_branch_coverage=1 00:32:49.543 --rc genhtml_function_coverage=1 00:32:49.543 --rc genhtml_legend=1 00:32:49.543 --rc geninfo_all_blocks=1 00:32:49.543 --rc geninfo_unexecuted_blocks=1 00:32:49.543 00:32:49.543 ' 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:49.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.543 --rc genhtml_branch_coverage=1 00:32:49.543 --rc genhtml_function_coverage=1 00:32:49.543 --rc genhtml_legend=1 00:32:49.543 --rc geninfo_all_blocks=1 00:32:49.543 --rc geninfo_unexecuted_blocks=1 00:32:49.543 00:32:49.543 ' 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:49.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.543 --rc genhtml_branch_coverage=1 00:32:49.543 --rc genhtml_function_coverage=1 00:32:49.543 --rc genhtml_legend=1 00:32:49.543 --rc geninfo_all_blocks=1 00:32:49.543 --rc geninfo_unexecuted_blocks=1 00:32:49.543 00:32:49.543 ' 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:49.543 --rc lcov_branch_coverage=1 --rc 
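
The cmp_versions trace running through this stretch is the harness deciding which lcov flag spelling to export: lt 1.15 2 splits both version strings on ".", "-" and ":" and compares them component by component, and since lcov 1.15 sorts before 2 the pre-2.0 "--rc lcov_branch_coverage=1" spelling is chosen. The traced logic condenses to roughly the sketch below (same idea, not the exact scripts/common.sh body).

# lt A B: succeed when version A sorts strictly before version B.
lt() {
    local -a ver1 ver2
    local IFS=.-: v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
    done
    return 1                                              # equal is not less-than
}
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "use pre-2.0 lcov flags"
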
lcov_function_coverage=1 00:32:49.543 --rc genhtml_branch_coverage=1 00:32:49.543 --rc genhtml_function_coverage=1 00:32:49.543 --rc genhtml_legend=1 00:32:49.543 --rc geninfo_all_blocks=1 00:32:49.543 --rc geninfo_unexecuted_blocks=1 00:32:49.543 00:32:49.543 ' 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:32:49.543 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # nvmf_veth_init 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:49.543 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:49.544 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:49.544 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:49.544 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:49.544 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:49.544 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:49.544 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:49.544 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:49.544 20:31:44 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:49.544 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:49.544 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:49.544 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:49.544 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:49.544 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:49.544 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:49.544 Cannot find device "nvmf_init_br" 00:32:49.544 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:32:49.544 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:49.544 Cannot find device "nvmf_init_br2" 00:32:49.544 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:32:49.544 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:49.544 Cannot find device "nvmf_tgt_br" 00:32:49.544 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true 00:32:49.544 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:49.544 Cannot find device "nvmf_tgt_br2" 00:32:49.544 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true 00:32:49.544 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:49.544 Cannot find device "nvmf_init_br" 00:32:49.803 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:32:49.803 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:49.803 Cannot find device "nvmf_init_br2" 00:32:49.803 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:32:49.803 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:49.803 Cannot find device "nvmf_tgt_br" 00:32:49.803 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true 00:32:49.803 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:49.803 Cannot find device "nvmf_tgt_br2" 00:32:49.803 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true 00:32:49.803 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:49.803 Cannot find device "nvmf_br" 00:32:49.803 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true 00:32:49.803 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:49.803 Cannot find device "nvmf_init_if" 00:32:49.803 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true 00:32:49.803 
20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:49.803 Cannot find device "nvmf_init_if2" 00:32:49.803 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true 00:32:49.803 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:49.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:49.803 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true 00:32:49.803 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:49.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:49.803 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true 00:32:49.803 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:49.803 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:49.803 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:49.803 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:49.803 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:49.804 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:49.804 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:49.804 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:49.804 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:49.804 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:49.804 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:49.804 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:49.804 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:49.804 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:49.804 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:49.804 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:49.804 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:49.804 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:49.804 20:31:44 
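
The "Cannot find device" complaints above are expected: the cleanup half of the veth setup runs first, and each failing delete is followed by "true" so a missing link never aborts the script. The setup traced through here then builds a small bridged topology, with the initiator ends in the root namespace and the target ends inside nvmf_tgt_ns_spdk. Condensed to one interface per side (the traced helper creates a second pair, nvmf_init_if2/nvmf_tgt_if2, with 10.0.0.2 and 10.0.0.4), it amounts to:

# One veth pair per side; the target end lives in the namespace,
# and both peer ends are enslaved to the nvmf_br bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
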
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:49.804 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:49.804 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:49.804 20:31:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:49.804 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:49.804 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:49.804 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:49.804 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:50.062 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:50.062 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:50.062 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:50.062 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:50.062 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:50.062 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:50.063 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:50.063 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:32:50.063 00:32:50.063 --- 10.0.0.3 ping statistics --- 00:32:50.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.063 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:50.063 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:50.063 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:32:50.063 00:32:50.063 --- 10.0.0.4 ping statistics --- 00:32:50.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.063 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:50.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:50.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:32:50.063 00:32:50.063 --- 10.0.0.1 ping statistics --- 00:32:50.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.063 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:50.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:50.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:32:50.063 00:32:50.063 --- 10.0.0.2 ping statistics --- 00:32:50.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.063 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # return 0 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=75262 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 75262 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 75262 ']' 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
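
With all four ping checks passing, nvmfappstart has launched the target inside the namespace (nvmfpid=75262) and waitforlisten now blocks until the app's RPC socket answers, retrying up to the max_retries=100 seen in the trace. A minimal stand-in for that launch-and-wait, assuming the default /var/tmp/spdk.sock socket path:

# Launch nvmf_tgt in the target namespace and wait for its RPC socket.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    [[ -S /var/tmp/spdk.sock ]] && break   # the socket appears once the app is listening
    sleep 0.1
done
kill -0 "$nvmfpid"                         # fail loudly if the app died during startup
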
00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:50.063 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:32:50.063 [2024-10-01 20:31:45.181498] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:32:50.063 [2024-10-01 20:31:45.181591] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:50.322 [2024-10-01 20:31:45.322177] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.322 [2024-10-01 20:31:45.380794] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:50.322 [2024-10-01 20:31:45.380854] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:50.322 [2024-10-01 20:31:45.380865] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:50.322 [2024-10-01 20:31:45.380873] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:50.322 [2024-10-01 20:31:45.380881] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:50.322 [2024-10-01 20:31:45.380917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.322 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:50.322 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:32:50.322 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:50.322 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:50.322 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:32:50.322 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:50.322 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:50.322 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.322 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:32:50.322 [2024-10-01 20:31:45.572741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:50.580 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.580 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:50.581 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.581 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:32:50.581 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.581 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:50.581 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.581 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:32:50.581 [2024-10-01 20:31:45.588882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:50.581 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.581 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:32:50.581 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.581 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:32:50.581 NULL1 00:32:50.581 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.581 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:32:50.581 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.581 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:32:50.581 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.581 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:32:50.581 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.581 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:32:50.581 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.581 20:31:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:50.581 [2024-10-01 20:31:45.641019] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
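
The rpc_cmd calls traced at fused_ordering.sh@15-20 provision the target end to end: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (allowing any host, serial SPDK00000000000001, at most 10 namespaces), a listener on 10.0.0.3:4420, and a null backing bdev exposed as a namespace. For readability, the same sequence written against scripts/rpc.py over the same default socket (the test itself drives these through the rpc_cmd helper):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc bdev_null_create NULL1 1000 512        # ~1 GB null bdev, 512-byte blocks
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
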
00:32:50.581 [2024-10-01 20:31:45.641069] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75299 ] 00:32:50.840 Attached to nqn.2016-06.io.spdk:cnode1 00:32:50.840 Namespace ID: 1 size: 1GB 00:32:50.840 fused_ordering(0) 00:32:50.840 fused_ordering(1) [... fused_ordering(2) through fused_ordering(1022) elided: 1021 consecutive counter lines emitted between 00:32:50.840 and 00:32:52.804 ...] 00:32:52.804 fused_ordering(1023) 00:32:52.804 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:32:52.804 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:32:52.804 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:52.804 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:32:52.804 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:52.804 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:32:52.804 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:52.804 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:52.804 rmmod nvme_tcp 00:32:52.804 rmmod nvme_fabrics 00:32:52.804 rmmod nvme_keyring 00:32:52.804 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:52.804 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:32:52.804 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:32:52.804 20:31:47
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 75262 ']' 00:32:52.804 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 75262 00:32:52.804 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 75262 ']' 00:32:52.804 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 75262 00:32:52.804 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:32:52.804 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:52.804 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75262 00:32:52.804 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:52.804 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:52.804 killing process with pid 75262 00:32:52.804 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75262' 00:32:52.804 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 75262 00:32:52.804 20:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 75262 00:32:53.063 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:53.063 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:53.063 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:53.063 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:32:53.063 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:32:53.063 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:53.063 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:32:53.063 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:53.063 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:53.063 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:53.063 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:53.063 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:53.063 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:53.063 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:53.063 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:53.063 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:53.063 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 00:32:53.063 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:53.063 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:53.063 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:53.063 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:53.322 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:53.322 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:53.322 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.322 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:53.322 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.322 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0 00:32:53.322 00:32:53.322 real 0m3.845s 00:32:53.322 user 0m4.386s 00:32:53.322 sys 0m1.365s 00:32:53.322 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:53.322 ************************************ 00:32:53.322 END TEST nvmf_fused_ordering 00:32:53.322 ************************************ 00:32:53.322 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:32:53.322 20:31:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:32:53.322 20:31:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:53.322 20:31:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:53.322 20:31:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:32:53.322 ************************************ 00:32:53.322 START TEST nvmf_ns_masking 00:32:53.322 ************************************ 00:32:53.322 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:32:53.322 * Looking for test storage... 
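Before the next test begins: the nvmftestfini sequence that closed nvmf_fused_ordering above unwinds everything the setup created. Condensed into plain commands, this is roughly the following (a sketch pieced together from the log; pid 75262 is this run's nvmf_tgt, and common.sh's retry and guard logic is reduced to comments):

# nvmfcleanup: flush and unload the host-side NVMe modules
sync
modprobe -v -r nvme-tcp        # retried (for i in {1..20}) until it unloads
modprobe -v -r nvme-fabrics
# killprocess 75262: after ps confirms the pid is an SPDK reactor process
kill 75262 && wait 75262
# iptr: restore iptables, dropping only the SPDK_NVMF rules
iptables-save | grep -v SPDK_NVMF | iptables-restore
# nvmf_veth_fini: detach everything from the bridge and delete the topology
ip link set nvmf_init_br nomaster      # likewise nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2
ip link set nvmf_init_br down          # likewise the other three bridge-side peers
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
# _remove_spdk_ns then disposes of the nvmf_tgt_ns_spdk namespace itself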
00:32:53.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:53.322 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:53.322 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:32:53.322 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:53.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.582 --rc genhtml_branch_coverage=1 00:32:53.582 --rc genhtml_function_coverage=1 00:32:53.582 --rc genhtml_legend=1 00:32:53.582 --rc geninfo_all_blocks=1 00:32:53.582 --rc geninfo_unexecuted_blocks=1 00:32:53.582 00:32:53.582 ' 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:53.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.582 --rc genhtml_branch_coverage=1 00:32:53.582 --rc genhtml_function_coverage=1 00:32:53.582 --rc genhtml_legend=1 00:32:53.582 --rc geninfo_all_blocks=1 00:32:53.582 --rc geninfo_unexecuted_blocks=1 00:32:53.582 00:32:53.582 ' 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:53.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.582 --rc genhtml_branch_coverage=1 00:32:53.582 --rc genhtml_function_coverage=1 00:32:53.582 --rc genhtml_legend=1 00:32:53.582 --rc geninfo_all_blocks=1 00:32:53.582 --rc geninfo_unexecuted_blocks=1 00:32:53.582 00:32:53.582 ' 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:53.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.582 --rc genhtml_branch_coverage=1 00:32:53.582 --rc genhtml_function_coverage=1 00:32:53.582 --rc genhtml_legend=1 00:32:53.582 --rc geninfo_all_blocks=1 00:32:53.582 --rc geninfo_unexecuted_blocks=1 00:32:53.582 00:32:53.582 ' 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
# uname -s 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:32:53.582 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:53.583 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # 
hostsock=/var/tmp/host.sock 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=530951b7-4a9e-4e71-8a45-6989aa1a796c 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=2eda081c-f972-4a60-9d35-584cbee8e0b4 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=1b9c96ff-c6e3-4ca5-8a68-4be2db1941d4 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # nvmf_veth_init 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:53.583 20:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:53.583 Cannot find device "nvmf_init_br" 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:53.583 Cannot find device "nvmf_init_br2" 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:53.583 Cannot find device "nvmf_tgt_br" 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:53.583 Cannot find device "nvmf_tgt_br2" 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:53.583 Cannot find device "nvmf_init_br" 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:53.583 Cannot find device "nvmf_init_br2" 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:53.583 Cannot find device "nvmf_tgt_br" 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:53.583 Cannot find device 
"nvmf_tgt_br2" 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:53.583 Cannot find device "nvmf_br" 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:53.583 Cannot find device "nvmf_init_if" 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:53.583 Cannot find device "nvmf_init_if2" 00:32:53.583 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true 00:32:53.584 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:53.584 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:53.584 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true 00:32:53.584 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:53.584 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:53.584 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true 00:32:53.584 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:53.584 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:53.584 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:53.584 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:53.584 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:53.842 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:53.842 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:53.842 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:53.842 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:53.842 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:53.842 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:53.842 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:53.842 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:53.842 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:53.842 
20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:53.842 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:53.842 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:53.842 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:53.842 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:53.842 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:53.842 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:53.842 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:53.842 20:31:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:53.842 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:53.842 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:53.842 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:53.842 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:53.842 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:53.842 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:53.842 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:53.842 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:53.842 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:53.842 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:53.842 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:53.842 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:32:53.842 00:32:53.842 --- 10.0.0.3 ping statistics --- 00:32:53.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.842 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:32:53.843 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:53.843 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
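The topology nvmf_veth_init has just built is easier to see in one place: the initiator-side veth ends stay in the root namespace (10.0.0.1/10.0.0.2), the target-side ends move into the nvmf_tgt_ns_spdk namespace (10.0.0.3/10.0.0.4), and the bridge-side peers all join nvmf_br. A minimal sketch with one interface per side, using the same names and addresses as the log (root required):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk        # target end lives inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if              # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br               # both bridge-side peers join nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Open the NVMe/TCP port; the SPDK_NVMF comment tag is what teardown greps for later:
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3                                    # same reachability check as above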
00:32:53.843 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:32:53.843 00:32:53.843 --- 10.0.0.4 ping statistics --- 00:32:53.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.843 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:32:53.843 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:53.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:53.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:32:53.843 00:32:53.843 --- 10.0.0.1 ping statistics --- 00:32:53.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.843 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:32:53.843 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:53.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:53.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:32:53.843 00:32:53.843 --- 10.0.0.2 ping statistics --- 00:32:53.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.843 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:32:53.843 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:53.843 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # return 0 00:32:53.843 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:53.843 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:53.843 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:53.843 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:53.843 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:53.843 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:53.843 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:54.101 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:32:54.101 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:54.101 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:54.101 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:32:54.101 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=75541 00:32:54.101 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:54.101 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 75541 00:32:54.101 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 75541 ']' 00:32:54.101 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:54.101 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:54.101 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:54.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:54.101 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:54.101 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:32:54.101 [2024-10-01 20:31:49.202691] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:32:54.101 [2024-10-01 20:31:49.202832] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:54.101 [2024-10-01 20:31:49.349082] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.359 [2024-10-01 20:31:49.418916] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:54.359 [2024-10-01 20:31:49.419200] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:54.359 [2024-10-01 20:31:49.419314] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:54.359 [2024-10-01 20:31:49.419413] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:54.359 [2024-10-01 20:31:49.419523] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:54.359 [2024-10-01 20:31:49.419637] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:54.359 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:54.359 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:32:54.360 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:54.360 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:54.360 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:32:54.360 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:54.360 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:54.617 [2024-10-01 20:31:49.844349] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:54.617 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:32:54.617 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:32:54.617 20:31:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:55.182 Malloc1 00:32:55.182 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:32:55.439 Malloc2 00:32:55.440 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:55.697 20:31:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:32:55.956 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:56.214 [2024-10-01 20:31:51.293873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:56.214 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:32:56.214 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1b9c96ff-c6e3-4ca5-8a68-4be2db1941d4 -a 10.0.0.3 -s 4420 -i 4 00:32:56.214 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:32:56.214 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:32:56.214 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:56.214 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:32:56.214 20:31:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:32:58.741 [ 0]:0x1 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
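Every visibility assertion from here on goes through ns_is_visible, which checks two things: the nsid shows up in nvme list-ns output, and nvme id-ns reports a non-zero NGUID for it. Reconstructed from the calls visible in the log, as a standalone helper:

ctrl_id=nvme0    # controller name recovered via 'nvme list-subsys -o json' above
ns_is_visible() {
    local nsid=$1
    nvme list-ns /dev/$ctrl_id | grep "$nsid"      # prints e.g. '[ 0]:0x1' when listed
    local nguid
    nguid=$(nvme id-ns /dev/$ctrl_id -n "$nsid" -o json | jq -r .nguid)
    # A controller that is denied a namespace sees an all-zero NGUID:
    [[ $nguid != "00000000000000000000000000000000" ]]
}
ns_is_visible 0x1 && echo 'nsid 0x1 is visible'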
00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d3a6cbdca5a74124b4ff962b87322757 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d3a6cbdca5a74124b4ff962b87322757 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:32:58.741 [ 0]:0x1 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d3a6cbdca5a74124b4ff962b87322757 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d3a6cbdca5a74124b4ff962b87322757 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:32:58.741 [ 1]:0x2 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:32:58.741 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:32:58.999 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4fd1ae4c828948efaf13bd16cdfcbc76 00:32:58.999 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4fd1ae4c828948efaf13bd16cdfcbc76 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:32:58.999 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:32:58.999 20:31:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:58.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:58.999 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:59.257 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:32:59.515 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:32:59.515 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1b9c96ff-c6e3-4ca5-8a68-4be2db1941d4 -a 10.0.0.3 -s 4420 -i 4 00:32:59.817 20:31:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:32:59.817 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:32:59.817 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:59.817 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:32:59.817 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:32:59.817 20:31:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:33:01.719 [ 0]:0x2 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:33:01.719 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:33:01.978 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4fd1ae4c828948efaf13bd16cdfcbc76 00:33:01.978 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4fd1ae4c828948efaf13bd16cdfcbc76 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:33:01.978 20:31:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:33:02.236 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:33:02.236 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:33:02.236 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:33:02.236 [ 0]:0x1 00:33:02.236 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:33:02.236 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:33:02.236 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d3a6cbdca5a74124b4ff962b87322757 00:33:02.236 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d3a6cbdca5a74124b4ff962b87322757 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:33:02.236 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:33:02.236 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:33:02.236 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:33:02.236 [ 1]:0x2 00:33:02.236 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:33:02.236 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:33:02.494 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=4fd1ae4c828948efaf13bd16cdfcbc76 00:33:02.494 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4fd1ae4c828948efaf13bd16cdfcbc76 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:33:02.494 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:33:02.753 [ 0]:0x2 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4fd1ae4c828948efaf13bd16cdfcbc76 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@45 -- # [[ 4fd1ae4c828948efaf13bd16cdfcbc76 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:02.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:02.753 20:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:33:03.320 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:33:03.320 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1b9c96ff-c6e3-4ca5-8a68-4be2db1941d4 -a 10.0.0.3 -s 4420 -i 4 00:33:03.320 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:33:03.320 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:33:03.320 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:33:03.320 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:33:03.320 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:33:03.320 20:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:33:05.223 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:05.223 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:33:05.223 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:33:05.223 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:33:05.223 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:33:05.223 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:33:05.223 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:33:05.223 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:33:05.480 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:33:05.480 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:33:05.480 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:33:05.480 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:33:05.480 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:33:05.480 [ 0]:0x1 00:33:05.481 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o 
json 00:33:05.481 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:33:05.481 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d3a6cbdca5a74124b4ff962b87322757 00:33:05.481 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d3a6cbdca5a74124b4ff962b87322757 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:33:05.481 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:33:05.481 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:33:05.481 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:33:05.481 [ 1]:0x2 00:33:05.481 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:33:05.481 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:33:05.481 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4fd1ae4c828948efaf13bd16cdfcbc76 00:33:05.481 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4fd1ae4c828948efaf13bd16cdfcbc76 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:33:05.481 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:33:05.739 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:33:05.739 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:33:05.739 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:33:05.739 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:33:05.739 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:05.739 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:33:05.739 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:05.739 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:33:05.739 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:33:05.739 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:33:05.739 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:33:05.739 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:33:05.997 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:33:05.997 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:33:05.998 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 
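Harness plumbing aside, the masking behaviour being exercised here reduces to three rpc.py calls against the target (method names and flags exactly as in the log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
host=nqn.2016-06.io.spdk:host1

# Attach the bdev as nsid 1, hidden from every host until one is explicitly allowed:
$rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 1 --no-auto-visible
# Grant host1 access: ns_is_visible 0x1 now passes on that host's controller:
$rpc nvmf_ns_add_host "$nqn" 1 "$host"
# Revoke it: the namespace drops back to an all-zero NGUID for host1:
$rpc nvmf_ns_remove_host "$nqn" 1 "$host"

Namespace 2 was added without --no-auto-visible, so the nvmf_ns_remove_host call against it just below is expected to fail with -32602 Invalid parameters, which is exactly what the NOT wrapper asserts.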
00:33:05.998 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:05.998 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:05.998 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:05.998 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:33:05.998 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:33:05.998 20:32:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:33:05.998 [ 0]:0x2 00:33:05.998 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:33:05.998 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:33:05.998 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4fd1ae4c828948efaf13bd16cdfcbc76 00:33:05.998 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4fd1ae4c828948efaf13bd16cdfcbc76 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:33:05.998 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:33:05.998 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:33:05.998 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:33:05.998 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:05.998 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:05.998 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:05.998 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:05.998 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:05.998 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:05.998 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:05.998 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:05.998 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:33:06.257 [2024-10-01 20:32:01.317182] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:33:06.257 2024/10/01 20:32:01 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 
nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:33:06.257 request: 00:33:06.257 { 00:33:06.257 "method": "nvmf_ns_remove_host", 00:33:06.257 "params": { 00:33:06.257 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:06.257 "nsid": 2, 00:33:06.257 "host": "nqn.2016-06.io.spdk:host1" 00:33:06.257 } 00:33:06.257 } 00:33:06.257 Got JSON-RPC error response 00:33:06.257 GoRPCClient: error on JSON-RPC call 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # 
grep 0x2 00:33:06.257 [ 0]:0x2 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4fd1ae4c828948efaf13bd16cdfcbc76 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4fd1ae4c828948efaf13bd16cdfcbc76 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:06.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=75909 00:33:06.257 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:33:06.258 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 75909 /var/tmp/host.sock 00:33:06.258 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 75909 ']' 00:33:06.258 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:33:06.258 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:06.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:33:06.258 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:33:06.258 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:06.516 20:32:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:33:06.516 [2024-10-01 20:32:01.591394] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
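From here the test runs a second spdk_tgt as the host side, driven over its own RPC socket (/var/tmp/host.sock, core mask 0x2), so both initiator and target are SPDK apps. The steps that follow, condensed; uuid2nguid is sketched from the tr -d - call in the log, and the uppercasing is an assumption made to match the -g values shown:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

uuid2nguid() { tr -d - <<< "${1^^}"; }   # 530951b7-... -> 530951B7...796C (uppercase assumed)

# Re-create nsid 1 with an explicit NGUID, as in the ns_masking.sh@124 call that follows:
$rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 1 \
  -g "$(uuid2nguid 530951b7-4a9e-4e71-8a45-6989aa1a796c)"
$rpc nvmf_ns_add_host "$nqn" 1 nqn.2016-06.io.spdk:host1

# Attach from the host-side app over the second socket; the namespace surfaces as bdev nvme0n1:
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
  -n "$nqn" -q nqn.2016-06.io.spdk:host1 -b nvme0
# bdev_get_bdevs on the host socket then lets the test match each bdev's uuid back to the
# namespace it was created from, confirming masking surfaced the right one.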
00:33:06.516 [2024-10-01 20:32:01.592074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75909 ] 00:33:06.516 [2024-10-01 20:32:01.725436] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.774 [2024-10-01 20:32:01.795727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:06.774 20:32:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:06.774 20:32:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:33:06.774 20:32:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:07.342 20:32:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:07.600 20:32:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 530951b7-4a9e-4e71-8a45-6989aa1a796c 00:33:07.600 20:32:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:33:07.600 20:32:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 530951B74A9E4E718A456989AA1A796C -i 00:33:07.858 20:32:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 2eda081c-f972-4a60-9d35-584cbee8e0b4 00:33:07.858 20:32:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:33:07.858 20:32:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 2EDA081CF9724A609D35584CBEE8E0B4 -i 00:33:08.116 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:33:08.375 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:33:08.942 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:33:08.942 20:32:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:33:09.200 nvme0n1 00:33:09.200 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:33:09.200 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 
-s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:33:09.459 nvme1n2 00:33:09.459 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:33:09.459 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:33:09.459 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:33:09.459 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:33:09.459 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:33:10.041 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:33:10.041 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:33:10.041 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:33:10.041 20:32:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:33:10.300 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 530951b7-4a9e-4e71-8a45-6989aa1a796c == \5\3\0\9\5\1\b\7\-\4\a\9\e\-\4\e\7\1\-\8\a\4\5\-\6\9\8\9\a\a\1\a\7\9\6\c ]] 00:33:10.300 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:33:10.300 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:33:10.300 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:33:10.558 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 2eda081c-f972-4a60-9d35-584cbee8e0b4 == \2\e\d\a\0\8\1\c\-\f\9\7\2\-\4\a\6\0\-\9\d\3\5\-\5\8\4\c\b\e\e\8\e\0\b\4 ]] 00:33:10.558 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 75909 00:33:10.558 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 75909 ']' 00:33:10.558 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 75909 00:33:10.558 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:33:10.558 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:10.558 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75909 00:33:10.558 killing process with pid 75909 00:33:10.558 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:10.558 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:10.558 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75909' 00:33:10.558 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 75909 00:33:10.558 20:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # 
wait 75909 00:33:10.817 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:11.384 rmmod nvme_tcp 00:33:11.384 rmmod nvme_fabrics 00:33:11.384 rmmod nvme_keyring 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 75541 ']' 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 75541 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 75541 ']' 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 75541 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75541 00:33:11.384 killing process with pid 75541 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75541' 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 75541 00:33:11.384 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 75541 00:33:11.643 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:11.643 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:11.643 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:11.643 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:33:11.643 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:11.643 20:32:06 
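[annotation] The killprocess calls above follow a recurring autotest_common.sh pattern: verify the pid is alive with kill -0, read the command name with ps to avoid signalling a privileged wrapper, then kill and reap with wait. A minimal reading of that helper, assuming the Linux branch only (the in-tree version also handles FreeBSD and sudo-wrapped children):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0        # nothing left to do
        local name=
        [[ $(uname) == Linux ]] && name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && return 1               # the real helper special-cases sudo wrappers
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                           # reap; exit code of a killed app is ignored
    }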
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:33:11.643 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:33:11.643 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:11.643 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:11.643 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:33:11.643 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:33:11.643 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:11.643 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:11.643 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:11.643 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:11.643 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:11.643 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:11.643 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:11.901 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:11.901 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:33:11.901 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:11.901 20:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:11.901 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns 00:33:11.901 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.901 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:11.901 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.901 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0 00:33:11.901 00:33:11.901 real 0m18.630s 00:33:11.901 user 0m29.996s 00:33:11.901 sys 0m2.857s 00:33:11.901 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:11.901 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:33:11.901 ************************************ 00:33:11.901 END TEST nvmf_ns_masking 00:33:11.901 ************************************ 00:33:11.901 20:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:33:11.901 20:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:33:11.901 20:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 
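[annotation] The nvmf_veth_fini sequence traced above tears the test network down in reverse order of creation: detach the four bridge legs, bring them down, delete the bridge, delete the veth pairs (deleting one end removes its peer), then remove the namespace. Collected into a best-effort sketch (requires root; || true added because teardown must tolerate missing links):

    for leg in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$leg" nomaster || true   # detach from nvmf_br
        ip link set "$leg" down || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
    ip netns delete nvmf_tgt_ns_spdk || true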
00:33:11.901 20:32:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:11.901 20:32:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:11.901 20:32:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:33:11.901 ************************************ 00:33:11.901 START TEST nvmf_auth_target 00:33:11.901 ************************************ 00:33:11.901 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:33:12.161 * Looking for test storage... 00:33:12.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:12.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.161 --rc genhtml_branch_coverage=1 00:33:12.161 --rc genhtml_function_coverage=1 00:33:12.161 --rc genhtml_legend=1 00:33:12.161 --rc geninfo_all_blocks=1 00:33:12.161 --rc geninfo_unexecuted_blocks=1 00:33:12.161 00:33:12.161 ' 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:12.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.161 --rc genhtml_branch_coverage=1 00:33:12.161 --rc genhtml_function_coverage=1 00:33:12.161 --rc genhtml_legend=1 00:33:12.161 --rc geninfo_all_blocks=1 00:33:12.161 --rc geninfo_unexecuted_blocks=1 00:33:12.161 00:33:12.161 ' 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:12.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.161 --rc genhtml_branch_coverage=1 00:33:12.161 --rc genhtml_function_coverage=1 00:33:12.161 --rc genhtml_legend=1 00:33:12.161 --rc geninfo_all_blocks=1 00:33:12.161 --rc geninfo_unexecuted_blocks=1 00:33:12.161 00:33:12.161 ' 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:12.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.161 --rc genhtml_branch_coverage=1 00:33:12.161 --rc genhtml_function_coverage=1 00:33:12.161 --rc genhtml_legend=1 00:33:12.161 --rc geninfo_all_blocks=1 00:33:12.161 --rc geninfo_unexecuted_blocks=1 00:33:12.161 00:33:12.161 ' 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
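[annotation] The scripts/common.sh trace above (lt 1.15 2 -> cmp_versions) splits each version string on '.', '-' and ':' and compares component by component, padding the shorter array with zeros. A compact sketch of that logic, assuming purely numeric components (the in-tree decimal() helper rejects anything else):

    lt() { cmp_versions "$1" "<" "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            ((a > b)) && { [[ $op == ">" ]]; return; }   # first larger component decides
            ((a < b)) && { [[ $op == "<" ]]; return; }
        done
        [[ $op == "==" ]]   # every component matched
    }

So lt 1.15 2 is true on the first component (1 < 2), which is why the harness falls back to the extra lcov branch/function options seen in the log.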
nvmf/common.sh@7 -- # uname -s 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:12.161 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:12.162 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # nvmf_veth_init 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:12.162 
20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:12.162 Cannot find device "nvmf_init_br" 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:12.162 Cannot find device "nvmf_init_br2" 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:33:12.162 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:12.163 Cannot find device "nvmf_tgt_br" 00:33:12.163 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:33:12.163 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:12.163 Cannot find device "nvmf_tgt_br2" 00:33:12.163 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:33:12.163 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:12.163 Cannot find device "nvmf_init_br" 00:33:12.163 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:33:12.163 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:33:12.163 Cannot find device "nvmf_init_br2" 00:33:12.163 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:33:12.163 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:12.163 Cannot find device "nvmf_tgt_br" 00:33:12.163 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:33:12.163 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:12.163 Cannot find device "nvmf_tgt_br2" 00:33:12.163 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:33:12.163 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:12.421 Cannot find device "nvmf_br" 00:33:12.421 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:33:12.421 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:12.421 Cannot find device "nvmf_init_if" 00:33:12.421 20:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:33:12.421 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:12.421 Cannot find device "nvmf_init_if2" 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:12.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:12.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:12.422 20:32:07 
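[annotation] The nvmf_veth_init run traced above (after the expected "Cannot find device" noise from the pre-cleanup pass) builds the tcp/virt test topology: two initiator veths in the root namespace, two target veths moved into nvmf_tgt_ns_spdk, and the *_br peer ends left outside to be bridged. The same commands, collected into a sketch (requires root):

    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"

    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target-side ends live in the namespace; the *_br peers stay outside.
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    ip link set nvmf_init_if up
    ip link set nvmf_init_if2 up
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up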
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:33:12.422 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:12.422 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:33:12.422 00:33:12.422 --- 10.0.0.3 ping statistics --- 00:33:12.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.422 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:33:12.422 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:33:12.422 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:33:12.422 00:33:12.422 --- 10.0.0.4 ping statistics --- 00:33:12.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.422 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:33:12.422 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:12.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
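[annotation] The bridge wiring above is followed by the ipts calls, which tag every firewall rule the test adds with an SPDK_NVMF comment; the iptr cleanup seen earlier in this log then drops exactly those rules by filtering the saved ruleset. The four pings verify initiator-to-target and target-to-initiator reachability across the bridge. A sketch of the wrapper pair (requires root):

    ipts() {
        # Re-attach the full argument list as a comment on the rule itself.
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }

    iptr() {
        # Drop every rule carrying the SPDK_NVMF tag, keep everything else.
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # root namespace -> target namespace, via nvmf_br
    iptr                  # restore the ruleset when the test is done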
00:33:12.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:33:12.681 00:33:12.681 --- 10.0.0.1 ping statistics --- 00:33:12.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.681 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:33:12.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:12.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:33:12.681 00:33:12.681 --- 10.0.0.2 ping statistics --- 00:33:12.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.681 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # return 0 00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=76322 00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 76322 00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 76322 ']' 00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:12.681 20:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=76353 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=754deb2fb1fdd3ff86dc333001d300e2fd6044ec0142a6c0 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.YMB 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 754deb2fb1fdd3ff86dc333001d300e2fd6044ec0142a6c0 0 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 754deb2fb1fdd3ff86dc333001d300e2fd6044ec0142a6c0 0 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=754deb2fb1fdd3ff86dc333001d300e2fd6044ec0142a6c0 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:33:12.940 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:33:13.200 20:32:08 
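[annotation] nvmfappstart above launches nvmf_tgt inside the namespace (pid 76322) and waitforlisten blocks until its RPC server answers; the host-side spdk_tgt (pid 76353) is waited on the same way against /var/tmp/host.sock. The trace only shows the loop exit ((( i == 0 )) ... return 0), so the following is an inferred sketch of the polling, assuming rpc_get_methods as the liveness probe:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = max_retries; i > 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1    # app died while starting
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
                &> /dev/null && return 0              # RPC server is up
            sleep 0.5
        done
        return 1
    }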
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.YMB 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.YMB 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.YMB 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=aefdc9f460e34dc26897f59bccaba00b7574b8cf6e1648071617bac604942810 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.UVv 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key aefdc9f460e34dc26897f59bccaba00b7574b8cf6e1648071617bac604942810 3 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 aefdc9f460e34dc26897f59bccaba00b7574b8cf6e1648071617bac604942810 3 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=aefdc9f460e34dc26897f59bccaba00b7574b8cf6e1648071617bac604942810 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.UVv 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.UVv 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.UVv 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:33:13.200 20:32:08 
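[annotation] Each gen_dhchap_key call above draws len/2 random bytes as a hex string (xxd -p -c0 -l N /dev/urandom), then formats it as an NVMe DH-HMAC-CHAP secret and stores it 0600 in a mktemp file whose path lands in keys[]/ckeys[]. The formatting itself happens in an inline python snippet the trace does not expand; the sketch below assumes the NVMe-oF secret representation "DHHC-1:<hash-id>:<base64(secret || crc32-le)>:", with hash ids as mapped in the trace (null=0, sha256=1, sha384=2, sha512=3):

    gen_dhchap_key() {
        local digest=$1 len=$2 key file
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # hex string, $len chars
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # Append a little-endian CRC-32 of the secret, base64 the result.
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+c).decode()), end="")' "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"                               # secrets must not be world-readable
        echo "$file"                                     # caller stores the path in keys[i]
    }

Usage, matching the trace: keys[0]=$(gen_dhchap_key null 48), ckeys[0]=$(gen_dhchap_key sha512 64), and so on for each digest/length pair.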
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=35d31489ca37ceda508a7b6d53761653 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.TCX 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 35d31489ca37ceda508a7b6d53761653 1 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 35d31489ca37ceda508a7b6d53761653 1 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=35d31489ca37ceda508a7b6d53761653 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.TCX 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.TCX 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.TCX 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=386a5b48a3d9ee2ee477b44bab902ace6dff7c360efaa85a 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.4mg 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 386a5b48a3d9ee2ee477b44bab902ace6dff7c360efaa85a 2 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 386a5b48a3d9ee2ee477b44bab902ace6dff7c360efaa85a 2 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=386a5b48a3d9ee2ee477b44bab902ace6dff7c360efaa85a 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.4mg 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.4mg 00:33:13.200 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.4mg 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=190ea435164614c7979c9af02d4efb59c14ec403df274a5b 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.rxZ 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 190ea435164614c7979c9af02d4efb59c14ec403df274a5b 2 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 190ea435164614c7979c9af02d4efb59c14ec403df274a5b 2 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=190ea435164614c7979c9af02d4efb59c14ec403df274a5b 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.rxZ 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.rxZ 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.rxZ 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:33:13.460 20:32:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=64d40fe7850803524423d0de009ce47b 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.C61 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 64d40fe7850803524423d0de009ce47b 1 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 64d40fe7850803524423d0de009ce47b 1 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=64d40fe7850803524423d0de009ce47b 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.C61 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.C61 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.C61 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=a04b2649b7c615b650ec09a376e740c20f619745d087379b3f785722fd235035 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.rei 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 
a04b2649b7c615b650ec09a376e740c20f619745d087379b3f785722fd235035 3 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 a04b2649b7c615b650ec09a376e740c20f619745d087379b3f785722fd235035 3 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=a04b2649b7c615b650ec09a376e740c20f619745d087379b3f785722fd235035 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.rei 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.rei 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.rei 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 76322 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 76322 ']' 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:13.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:13.460 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:13.719 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:13.719 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:33:13.719 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 76353 /var/tmp/host.sock 00:33:13.719 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 76353 ']' 00:33:13.719 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:33:13.719 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:13.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:33:13.719 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:33:13.719 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:13.719 20:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:14.284 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:14.284 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:33:14.284 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:33:14.284 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.284 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:14.284 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.284 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:33:14.284 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.YMB 00:33:14.284 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.284 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:14.284 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.284 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.YMB 00:33:14.284 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.YMB 00:33:14.542 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.UVv ]] 00:33:14.542 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UVv 00:33:14.542 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.543 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:14.543 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.543 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UVv 00:33:14.543 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UVv 00:33:14.801 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:33:14.801 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.TCX 00:33:14.801 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.801 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:14.801 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.801 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.TCX 00:33:14.801 20:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.TCX 00:33:15.060 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.4mg ]] 00:33:15.060 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4mg 00:33:15.060 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.060 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:15.060 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.060 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4mg 00:33:15.060 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4mg 00:33:15.318 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:33:15.319 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.rxZ 00:33:15.319 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.319 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:15.319 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.319 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.rxZ 00:33:15.319 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.rxZ 00:33:15.577 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.C61 ]] 00:33:15.577 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.C61 00:33:15.577 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.577 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:15.836 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.836 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.C61 00:33:15.836 20:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.C61 00:33:16.097 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:33:16.097 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.rei 00:33:16.098 20:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.098 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:16.098 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.098 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.rei 00:33:16.098 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.rei 00:33:16.367 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:33:16.367 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:33:16.367 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:33:16.367 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:16.367 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:33:16.367 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:33:16.626 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:33:16.626 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:16.626 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:33:16.626 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:33:16.626 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:33:16.626 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:16.626 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:16.626 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.626 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:16.626 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.626 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:16.626 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:16.626 20:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:17.193 00:33:17.193 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:17.193 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:17.193 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:17.451 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.451 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:17.451 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.451 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:17.451 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.451 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:17.451 { 00:33:17.451 "auth": { 00:33:17.451 "dhgroup": "null", 00:33:17.451 "digest": "sha256", 00:33:17.451 "state": "completed" 00:33:17.451 }, 00:33:17.451 "cntlid": 1, 00:33:17.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:33:17.451 "listen_address": { 00:33:17.451 "adrfam": "IPv4", 00:33:17.451 "traddr": "10.0.0.3", 00:33:17.451 "trsvcid": "4420", 00:33:17.451 "trtype": "TCP" 00:33:17.451 }, 00:33:17.451 "peer_address": { 00:33:17.451 "adrfam": "IPv4", 00:33:17.451 "traddr": "10.0.0.1", 00:33:17.451 "trsvcid": "58642", 00:33:17.451 "trtype": "TCP" 00:33:17.451 }, 00:33:17.451 "qid": 0, 00:33:17.451 "state": "enabled", 00:33:17.451 "thread": "nvmf_tgt_poll_group_000" 00:33:17.451 } 00:33:17.451 ]' 00:33:17.451 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:17.451 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:33:17.451 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:17.451 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:33:17.451 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:17.451 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:17.452 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:17.452 20:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:18.019 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:33:18.019 20:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:33:23.287 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:23.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:23.287 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:33:23.287 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.287 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:23.287 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.287 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:23.287 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:33:23.287 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:33:23.287 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:33:23.287 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:23.287 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:33:23.287 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:33:23.287 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:33:23.287 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:23.287 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:23.287 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.287 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:23.287 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.287 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:23.287 20:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:23.287 20:32:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:23.287 00:33:23.287 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:23.287 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:23.287 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:23.546 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.546 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:23.546 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.546 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:23.546 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.546 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:23.546 { 00:33:23.546 "auth": { 00:33:23.546 "dhgroup": "null", 00:33:23.546 "digest": "sha256", 00:33:23.546 "state": "completed" 00:33:23.546 }, 00:33:23.546 "cntlid": 3, 00:33:23.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:33:23.546 "listen_address": { 00:33:23.546 "adrfam": "IPv4", 00:33:23.546 "traddr": "10.0.0.3", 00:33:23.546 "trsvcid": "4420", 00:33:23.546 "trtype": "TCP" 00:33:23.546 }, 00:33:23.546 "peer_address": { 00:33:23.546 "adrfam": "IPv4", 00:33:23.546 "traddr": "10.0.0.1", 00:33:23.546 "trsvcid": "52508", 00:33:23.546 "trtype": "TCP" 00:33:23.546 }, 00:33:23.546 "qid": 0, 00:33:23.546 "state": "enabled", 00:33:23.546 "thread": "nvmf_tgt_poll_group_000" 00:33:23.546 } 00:33:23.546 ]' 00:33:23.546 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:23.546 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:33:23.546 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:23.546 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:33:23.546 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:23.546 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:23.546 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:23.546 20:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:23.804 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret 
DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:33:23.804 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:33:24.739 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:24.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:24.739 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:33:24.739 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.739 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:24.739 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.739 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:24.739 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:33:24.739 20:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:33:24.998 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:33:24.998 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:24.998 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:33:24.998 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:33:24.998 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:33:24.998 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:24.998 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:24.998 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.998 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:24.998 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.998 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:24.998 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:24.998 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:25.255 00:33:25.256 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:25.256 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:25.256 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:25.822 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.822 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:25.822 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.822 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:25.822 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.822 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:25.822 { 00:33:25.822 "auth": { 00:33:25.822 "dhgroup": "null", 00:33:25.822 "digest": "sha256", 00:33:25.822 "state": "completed" 00:33:25.822 }, 00:33:25.822 "cntlid": 5, 00:33:25.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:33:25.822 "listen_address": { 00:33:25.822 "adrfam": "IPv4", 00:33:25.822 "traddr": "10.0.0.3", 00:33:25.822 "trsvcid": "4420", 00:33:25.822 "trtype": "TCP" 00:33:25.822 }, 00:33:25.822 "peer_address": { 00:33:25.822 "adrfam": "IPv4", 00:33:25.822 "traddr": "10.0.0.1", 00:33:25.822 "trsvcid": "52544", 00:33:25.822 "trtype": "TCP" 00:33:25.822 }, 00:33:25.822 "qid": 0, 00:33:25.822 "state": "enabled", 00:33:25.822 "thread": "nvmf_tgt_poll_group_000" 00:33:25.822 } 00:33:25.822 ]' 00:33:25.822 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:25.822 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:33:25.822 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:25.822 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:33:25.822 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:25.822 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:25.822 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:25.822 20:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:26.081 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:33:26.081 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:33:27.016 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:27.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:27.016 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:33:27.016 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.016 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:27.016 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.016 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:27.016 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:33:27.016 20:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:33:27.016 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:33:27.016 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:27.016 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:33:27.016 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:33:27.016 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:33:27.016 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:27.016 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key3 00:33:27.016 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.016 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:27.275 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.275 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:33:27.275 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:33:27.275 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:33:27.534 00:33:27.534 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:27.534 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:27.534 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:27.792 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:27.792 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:27.792 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.792 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:27.792 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.792 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:27.792 { 00:33:27.792 "auth": { 00:33:27.792 "dhgroup": "null", 00:33:27.792 "digest": "sha256", 00:33:27.792 "state": "completed" 00:33:27.792 }, 00:33:27.792 "cntlid": 7, 00:33:27.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:33:27.792 "listen_address": { 00:33:27.792 "adrfam": "IPv4", 00:33:27.792 "traddr": "10.0.0.3", 00:33:27.792 "trsvcid": "4420", 00:33:27.792 "trtype": "TCP" 00:33:27.792 }, 00:33:27.792 "peer_address": { 00:33:27.792 "adrfam": "IPv4", 00:33:27.792 "traddr": "10.0.0.1", 00:33:27.792 "trsvcid": "52558", 00:33:27.792 "trtype": "TCP" 00:33:27.792 }, 00:33:27.792 "qid": 0, 00:33:27.792 "state": "enabled", 00:33:27.792 "thread": "nvmf_tgt_poll_group_000" 00:33:27.792 } 00:33:27.792 ]' 00:33:27.792 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:27.792 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:33:27.792 20:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:27.792 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:33:27.792 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:28.050 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:28.050 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:28.050 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:28.309 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:33:28.309 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:33:28.876 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:28.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:28.876 20:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:33:28.876 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.876 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:28.876 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.876 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:33:28.876 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:28.876 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:28.876 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:29.134 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:33:29.134 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:29.134 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:33:29.134 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:33:29.134 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:33:29.134 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:29.134 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:29.134 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.134 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:29.134 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.134 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:29.134 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:29.134 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:29.702 00:33:29.702 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:29.702 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:29.702 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:29.960 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:29.960 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:29.960 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.960 20:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:29.960 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.960 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:29.960 { 00:33:29.960 "auth": { 00:33:29.960 "dhgroup": "ffdhe2048", 00:33:29.960 "digest": "sha256", 00:33:29.960 "state": "completed" 00:33:29.960 }, 00:33:29.960 "cntlid": 9, 00:33:29.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:33:29.960 "listen_address": { 00:33:29.960 "adrfam": "IPv4", 00:33:29.960 "traddr": "10.0.0.3", 00:33:29.960 "trsvcid": "4420", 00:33:29.960 "trtype": "TCP" 00:33:29.960 }, 00:33:29.960 "peer_address": { 00:33:29.960 "adrfam": "IPv4", 00:33:29.960 "traddr": "10.0.0.1", 00:33:29.960 "trsvcid": "52582", 00:33:29.960 "trtype": "TCP" 00:33:29.960 }, 00:33:29.960 "qid": 0, 00:33:29.960 "state": "enabled", 00:33:29.960 "thread": "nvmf_tgt_poll_group_000" 00:33:29.960 } 00:33:29.960 ]' 00:33:29.960 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:29.960 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:33:29.960 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:29.960 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:29.960 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:29.960 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:29.960 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:29.960 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:30.227 
20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:33:30.227 20:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:33:31.181 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:31.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:31.181 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:33:31.181 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.181 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:31.181 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.181 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:31.181 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:31.181 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:31.439 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:33:31.439 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:31.439 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:33:31.439 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:33:31.439 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:33:31.439 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:31.439 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:31.439 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.439 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:31.439 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.439 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:31.439 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:31.439 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:31.697 00:33:31.697 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:31.697 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:31.697 20:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:31.955 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.955 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:31.955 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.955 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:31.955 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.955 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:31.955 { 00:33:31.955 "auth": { 00:33:31.955 "dhgroup": "ffdhe2048", 00:33:31.955 "digest": "sha256", 00:33:31.955 "state": "completed" 00:33:31.955 }, 00:33:31.955 "cntlid": 11, 00:33:31.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:33:31.955 "listen_address": { 00:33:31.955 "adrfam": "IPv4", 00:33:31.955 "traddr": "10.0.0.3", 00:33:31.955 "trsvcid": "4420", 00:33:31.955 "trtype": "TCP" 00:33:31.955 }, 00:33:31.955 "peer_address": { 00:33:31.955 "adrfam": "IPv4", 00:33:31.955 "traddr": "10.0.0.1", 00:33:31.955 "trsvcid": "52606", 00:33:31.955 "trtype": "TCP" 00:33:31.955 }, 00:33:31.955 "qid": 0, 00:33:31.955 "state": "enabled", 00:33:31.955 "thread": "nvmf_tgt_poll_group_000" 00:33:31.955 } 00:33:31.955 ]' 00:33:31.955 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:32.212 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:33:32.212 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:32.212 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:32.212 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:32.212 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:32.212 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:32.212 
20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:32.471 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:33:32.471 20:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:33:33.038 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:33.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:33.038 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:33:33.038 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.038 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:33.296 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.296 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:33.296 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:33.296 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:33.554 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:33:33.554 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:33.554 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:33:33.554 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:33:33.554 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:33:33.554 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:33.554 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:33.554 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.554 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:33.554 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:33:33.555 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:33.555 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:33.555 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:33.813 00:33:33.813 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:33.813 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:33.813 20:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:34.070 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:34.070 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:34.070 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.070 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:34.070 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.070 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:34.070 { 00:33:34.070 "auth": { 00:33:34.070 "dhgroup": "ffdhe2048", 00:33:34.070 "digest": "sha256", 00:33:34.070 "state": "completed" 00:33:34.070 }, 00:33:34.070 "cntlid": 13, 00:33:34.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:33:34.070 "listen_address": { 00:33:34.070 "adrfam": "IPv4", 00:33:34.070 "traddr": "10.0.0.3", 00:33:34.070 "trsvcid": "4420", 00:33:34.070 "trtype": "TCP" 00:33:34.070 }, 00:33:34.070 "peer_address": { 00:33:34.070 "adrfam": "IPv4", 00:33:34.070 "traddr": "10.0.0.1", 00:33:34.070 "trsvcid": "48520", 00:33:34.070 "trtype": "TCP" 00:33:34.070 }, 00:33:34.070 "qid": 0, 00:33:34.070 "state": "enabled", 00:33:34.070 "thread": "nvmf_tgt_poll_group_000" 00:33:34.070 } 00:33:34.070 ]' 00:33:34.070 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:34.327 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:33:34.327 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:34.327 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:34.327 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:34.327 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:34.327 20:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:34.327 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:34.585 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:33:34.585 20:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:33:35.521 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:35.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:35.521 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:33:35.521 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.521 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:35.521 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.521 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:35.521 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:35.521 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:35.521 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:33:35.521 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:35.521 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:33:35.521 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:33:35.521 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:33:35.521 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:35.521 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key3 00:33:35.521 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.521 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
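Each round traced in this section is the same five-step recipe; condensed into one sketch, with the socket paths, NQNs, addresses, and key files copied from this run (only the condensation is ours):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703

# host side: load the key and pin the DH-HMAC-CHAP digest/dhgroup under test
$RPC -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.rei
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# target side: allow the host NQN on the subsystem with that key
$RPC nvmf_subsystem_add_host "$NQN" "$HOSTNQN" --dhchap-key key3

# host side: authenticated attach, then tear down for the next round
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$NQN" -b nvme0 --dhchap-key key3
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$RPC nvmf_subsystem_remove_host "$NQN" "$HOSTNQN"

The two-socket split is the point of the setup: the target application answers on the default /var/tmp/spdk.sock while a second SPDK instance, driven over /var/tmp/host.sock, plays the host, so one test run exercises both ends of the authentication handshake.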
00:33:35.521 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.521 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:33:35.521 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:33:35.521 20:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:33:36.088 00:33:36.088 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:36.088 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:36.088 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:36.346 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:36.346 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:36.346 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.347 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:36.347 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.347 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:36.347 { 00:33:36.347 "auth": { 00:33:36.347 "dhgroup": "ffdhe2048", 00:33:36.347 "digest": "sha256", 00:33:36.347 "state": "completed" 00:33:36.347 }, 00:33:36.347 "cntlid": 15, 00:33:36.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:33:36.347 "listen_address": { 00:33:36.347 "adrfam": "IPv4", 00:33:36.347 "traddr": "10.0.0.3", 00:33:36.347 "trsvcid": "4420", 00:33:36.347 "trtype": "TCP" 00:33:36.347 }, 00:33:36.347 "peer_address": { 00:33:36.347 "adrfam": "IPv4", 00:33:36.347 "traddr": "10.0.0.1", 00:33:36.347 "trsvcid": "48542", 00:33:36.347 "trtype": "TCP" 00:33:36.347 }, 00:33:36.347 "qid": 0, 00:33:36.347 "state": "enabled", 00:33:36.347 "thread": "nvmf_tgt_poll_group_000" 00:33:36.347 } 00:33:36.347 ]' 00:33:36.347 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:36.347 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:33:36.347 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:36.347 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:36.347 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:36.347 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:36.347 
20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:36.347 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:36.913 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:33:36.913 20:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:33:37.480 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:37.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:37.481 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:33:37.481 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.481 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:37.481 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.481 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:33:37.481 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:37.481 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:37.481 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:37.740 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:33:37.740 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:37.740 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:33:37.740 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:33:37.740 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:33:37.740 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:37.740 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:37.740 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.740 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:33:37.740 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.740 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:37.740 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:37.740 20:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:37.998 00:33:37.998 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:37.998 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:37.998 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:38.257 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.257 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:38.257 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.257 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:38.515 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.515 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:38.515 { 00:33:38.515 "auth": { 00:33:38.515 "dhgroup": "ffdhe3072", 00:33:38.515 "digest": "sha256", 00:33:38.515 "state": "completed" 00:33:38.515 }, 00:33:38.515 "cntlid": 17, 00:33:38.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:33:38.515 "listen_address": { 00:33:38.515 "adrfam": "IPv4", 00:33:38.516 "traddr": "10.0.0.3", 00:33:38.516 "trsvcid": "4420", 00:33:38.516 "trtype": "TCP" 00:33:38.516 }, 00:33:38.516 "peer_address": { 00:33:38.516 "adrfam": "IPv4", 00:33:38.516 "traddr": "10.0.0.1", 00:33:38.516 "trsvcid": "48564", 00:33:38.516 "trtype": "TCP" 00:33:38.516 }, 00:33:38.516 "qid": 0, 00:33:38.516 "state": "enabled", 00:33:38.516 "thread": "nvmf_tgt_poll_group_000" 00:33:38.516 } 00:33:38.516 ]' 00:33:38.516 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:38.516 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:33:38.516 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:38.516 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:38.516 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:38.516 20:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:38.516 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:38.516 20:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:39.082 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:33:39.082 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:33:39.649 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:39.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:39.649 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:33:39.649 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.649 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:39.649 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.649 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:39.649 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:39.649 20:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:39.907 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:33:39.907 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:39.907 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:33:39.907 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:33:39.907 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:33:39.907 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:39.907 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:33:39.907 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.907 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:39.907 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.907 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:39.907 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:39.907 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:40.166 00:33:40.166 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:40.166 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:40.166 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:40.731 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:40.731 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:40.731 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.731 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:40.731 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.731 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:40.731 { 00:33:40.731 "auth": { 00:33:40.731 "dhgroup": "ffdhe3072", 00:33:40.731 "digest": "sha256", 00:33:40.731 "state": "completed" 00:33:40.731 }, 00:33:40.731 "cntlid": 19, 00:33:40.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:33:40.731 "listen_address": { 00:33:40.731 "adrfam": "IPv4", 00:33:40.731 "traddr": "10.0.0.3", 00:33:40.731 "trsvcid": "4420", 00:33:40.731 "trtype": "TCP" 00:33:40.731 }, 00:33:40.731 "peer_address": { 00:33:40.731 "adrfam": "IPv4", 00:33:40.731 "traddr": "10.0.0.1", 00:33:40.731 "trsvcid": "48602", 00:33:40.731 "trtype": "TCP" 00:33:40.731 }, 00:33:40.731 "qid": 0, 00:33:40.731 "state": "enabled", 00:33:40.731 "thread": "nvmf_tgt_poll_group_000" 00:33:40.731 } 00:33:40.731 ]' 00:33:40.731 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:40.731 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:33:40.731 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:40.731 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:40.731 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:40.731 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:40.731 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:40.731 20:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:40.988 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:33:40.988 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:33:41.921 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:41.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:41.921 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:33:41.921 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.921 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:41.921 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.921 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:41.921 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:41.921 20:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:42.178 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:33:42.178 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:42.178 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:33:42.178 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:33:42.178 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:33:42.178 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:42.178 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:42.178 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.178 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:42.178 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.178 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:42.178 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:42.178 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:42.435 00:33:42.693 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:42.693 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:42.693 20:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:42.950 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:42.950 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:42.950 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.950 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:42.950 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.950 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:42.950 { 00:33:42.950 "auth": { 00:33:42.950 "dhgroup": "ffdhe3072", 00:33:42.950 "digest": "sha256", 00:33:42.950 "state": "completed" 00:33:42.950 }, 00:33:42.950 "cntlid": 21, 00:33:42.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:33:42.950 "listen_address": { 00:33:42.950 "adrfam": "IPv4", 00:33:42.950 "traddr": "10.0.0.3", 00:33:42.950 "trsvcid": "4420", 00:33:42.950 "trtype": "TCP" 00:33:42.950 }, 00:33:42.950 "peer_address": { 00:33:42.950 "adrfam": "IPv4", 00:33:42.950 "traddr": "10.0.0.1", 00:33:42.950 "trsvcid": "48624", 00:33:42.950 "trtype": "TCP" 00:33:42.950 }, 00:33:42.950 "qid": 0, 00:33:42.950 "state": "enabled", 00:33:42.950 "thread": "nvmf_tgt_poll_group_000" 00:33:42.950 } 00:33:42.950 ]' 00:33:42.950 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:42.950 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:33:42.950 20:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:42.950 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:42.950 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:43.209 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:43.209 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:43.209 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:43.468 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:33:43.468 20:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:33:44.033 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:44.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:44.033 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:33:44.033 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.033 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:44.033 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.033 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:44.033 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:44.033 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:44.599 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:33:44.599 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:44.599 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:33:44.599 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:33:44.599 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:33:44.599 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:44.599 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key3 00:33:44.599 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.599 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:44.599 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.599 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:33:44.599 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:33:44.599 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:33:44.858 00:33:44.858 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:44.858 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:44.858 20:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:45.116 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.116 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:45.117 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.117 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:45.117 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.117 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:45.117 { 00:33:45.117 "auth": { 00:33:45.117 "dhgroup": "ffdhe3072", 00:33:45.117 "digest": "sha256", 00:33:45.117 "state": "completed" 00:33:45.117 }, 00:33:45.117 "cntlid": 23, 00:33:45.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:33:45.117 "listen_address": { 00:33:45.117 "adrfam": "IPv4", 00:33:45.117 "traddr": "10.0.0.3", 00:33:45.117 "trsvcid": "4420", 00:33:45.117 "trtype": "TCP" 00:33:45.117 }, 00:33:45.117 "peer_address": { 00:33:45.117 "adrfam": "IPv4", 00:33:45.117 "traddr": "10.0.0.1", 00:33:45.117 "trsvcid": "41412", 00:33:45.117 "trtype": "TCP" 00:33:45.117 }, 00:33:45.117 "qid": 0, 00:33:45.117 "state": "enabled", 00:33:45.117 "thread": "nvmf_tgt_poll_group_000" 00:33:45.117 } 00:33:45.117 ]' 00:33:45.117 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:45.117 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:33:45.117 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:45.117 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:45.117 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:45.376 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:45.376 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:45.376 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:45.635 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:33:45.635 20:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:33:46.202 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:46.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:46.202 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:33:46.202 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.202 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:46.202 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.202 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:33:46.202 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:46.202 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:46.202 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:46.461 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:33:46.461 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:46.461 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:33:46.461 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:33:46.461 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:33:46.461 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:46.461 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:46.461 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.461 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:46.461 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.461 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:46.461 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:46.461 20:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:47.033 00:33:47.033 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:47.033 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:47.033 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:47.292 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.292 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:47.292 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.292 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:47.292 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.292 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:47.292 { 00:33:47.292 "auth": { 00:33:47.292 "dhgroup": "ffdhe4096", 00:33:47.292 "digest": "sha256", 00:33:47.292 "state": "completed" 00:33:47.292 }, 00:33:47.292 "cntlid": 25, 00:33:47.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:33:47.292 "listen_address": { 00:33:47.292 "adrfam": "IPv4", 00:33:47.292 "traddr": "10.0.0.3", 00:33:47.292 "trsvcid": "4420", 00:33:47.292 "trtype": "TCP" 00:33:47.292 }, 00:33:47.292 "peer_address": { 00:33:47.292 "adrfam": "IPv4", 00:33:47.292 "traddr": "10.0.0.1", 00:33:47.292 "trsvcid": "41424", 00:33:47.292 "trtype": "TCP" 00:33:47.292 }, 00:33:47.292 "qid": 0, 00:33:47.292 "state": "enabled", 00:33:47.292 "thread": "nvmf_tgt_poll_group_000" 00:33:47.292 } 00:33:47.292 ]' 00:33:47.292 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:33:47.292 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:33:47.292 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:47.551 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:33:47.551 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:47.551 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:47.551 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:47.551 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:47.809 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:33:47.809 20:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:33:48.376 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:48.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:48.376 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:33:48.376 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.376 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:48.376 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.376 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:48.376 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:48.376 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:48.635 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:33:48.635 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:48.635 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:33:48.635 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:33:48.635 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:33:48.635 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:48.635 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:48.635 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.635 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:48.635 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.635 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:48.635 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:48.635 20:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:49.202 00:33:49.203 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:49.203 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:49.203 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:49.461 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.461 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:49.462 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.462 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:49.462 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.462 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:49.462 { 00:33:49.462 "auth": { 00:33:49.462 "dhgroup": "ffdhe4096", 00:33:49.462 "digest": "sha256", 00:33:49.462 "state": "completed" 00:33:49.462 }, 00:33:49.462 "cntlid": 27, 00:33:49.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:33:49.462 "listen_address": { 00:33:49.462 "adrfam": "IPv4", 00:33:49.462 "traddr": "10.0.0.3", 00:33:49.462 "trsvcid": "4420", 00:33:49.462 "trtype": "TCP" 00:33:49.462 }, 00:33:49.462 "peer_address": { 00:33:49.462 "adrfam": "IPv4", 00:33:49.462 "traddr": "10.0.0.1", 00:33:49.462 "trsvcid": "41446", 00:33:49.462 "trtype": "TCP" 00:33:49.462 }, 00:33:49.462 "qid": 0, 
00:33:49.462 "state": "enabled", 00:33:49.462 "thread": "nvmf_tgt_poll_group_000" 00:33:49.462 } 00:33:49.462 ]' 00:33:49.462 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:49.462 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:33:49.462 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:49.720 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:33:49.720 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:49.720 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:49.720 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:49.720 20:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:49.979 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:33:49.979 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:33:50.545 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:50.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:50.545 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:33:50.545 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.545 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:50.545 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.545 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:50.545 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:50.545 20:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:51.112 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:33:51.112 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:51.112 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha256 00:33:51.112 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:33:51.112 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:33:51.112 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:51.112 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:51.112 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.112 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:51.112 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.112 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:51.112 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:51.112 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:51.370 00:33:51.370 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:51.370 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:51.370 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:51.628 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.628 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:51.628 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.628 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:51.628 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.628 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:51.628 { 00:33:51.628 "auth": { 00:33:51.628 "dhgroup": "ffdhe4096", 00:33:51.628 "digest": "sha256", 00:33:51.628 "state": "completed" 00:33:51.628 }, 00:33:51.628 "cntlid": 29, 00:33:51.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:33:51.628 "listen_address": { 00:33:51.628 "adrfam": "IPv4", 00:33:51.628 "traddr": "10.0.0.3", 00:33:51.628 "trsvcid": "4420", 00:33:51.628 "trtype": "TCP" 00:33:51.628 }, 00:33:51.628 "peer_address": { 00:33:51.628 "adrfam": "IPv4", 00:33:51.628 "traddr": "10.0.0.1", 
00:33:51.628 "trsvcid": "41472", 00:33:51.628 "trtype": "TCP" 00:33:51.628 }, 00:33:51.628 "qid": 0, 00:33:51.628 "state": "enabled", 00:33:51.628 "thread": "nvmf_tgt_poll_group_000" 00:33:51.628 } 00:33:51.628 ]' 00:33:51.628 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:51.628 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:33:51.628 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:51.886 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:33:51.886 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:51.886 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:51.886 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:51.886 20:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:52.145 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:33:52.145 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:33:52.712 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:52.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:52.970 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:33:52.970 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.970 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:52.970 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.970 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:52.970 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:52.971 20:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:53.230 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:33:53.230 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:33:53.230 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:33:53.230 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:33:53.230 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:33:53.230 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:53.230 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key3 00:33:53.230 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.230 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:53.230 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.230 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:33:53.230 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:33:53.230 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:33:53.487 00:33:53.487 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:53.487 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:53.487 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:53.745 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:53.745 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:53.745 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.745 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:53.745 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.745 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:53.745 { 00:33:53.745 "auth": { 00:33:53.745 "dhgroup": "ffdhe4096", 00:33:53.745 "digest": "sha256", 00:33:53.745 "state": "completed" 00:33:53.745 }, 00:33:53.745 "cntlid": 31, 00:33:53.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:33:53.745 "listen_address": { 00:33:53.745 "adrfam": "IPv4", 00:33:53.745 "traddr": "10.0.0.3", 00:33:53.745 "trsvcid": "4420", 00:33:53.745 "trtype": "TCP" 00:33:53.745 }, 00:33:53.745 "peer_address": { 00:33:53.745 "adrfam": "IPv4", 00:33:53.745 "traddr": 
"10.0.0.1", 00:33:53.745 "trsvcid": "55984", 00:33:53.745 "trtype": "TCP" 00:33:53.745 }, 00:33:53.745 "qid": 0, 00:33:53.745 "state": "enabled", 00:33:53.745 "thread": "nvmf_tgt_poll_group_000" 00:33:53.745 } 00:33:53.745 ]' 00:33:53.745 20:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:54.003 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:33:54.003 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:54.003 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:33:54.003 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:54.003 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:54.003 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:54.003 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:54.261 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:33:54.261 20:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:33:55.197 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:55.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:55.197 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:33:55.197 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.197 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:55.197 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.198 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:33:55.198 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:55.198 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:55.198 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:55.198 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:33:55.198 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:55.198 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:33:55.198 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:33:55.198 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:33:55.198 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:55.198 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:55.198 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.198 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:55.198 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.198 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:55.198 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:55.198 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:55.763 00:33:55.763 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:55.763 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:55.763 20:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:56.373 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.373 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:56.373 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.373 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:56.373 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.373 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:56.373 { 00:33:56.373 "auth": { 00:33:56.373 "dhgroup": "ffdhe6144", 00:33:56.373 "digest": "sha256", 00:33:56.373 "state": "completed" 00:33:56.373 }, 00:33:56.373 "cntlid": 33, 00:33:56.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:33:56.373 "listen_address": { 00:33:56.373 "adrfam": "IPv4", 00:33:56.373 "traddr": "10.0.0.3", 00:33:56.373 "trsvcid": "4420", 00:33:56.373 
"trtype": "TCP" 00:33:56.373 }, 00:33:56.373 "peer_address": { 00:33:56.373 "adrfam": "IPv4", 00:33:56.373 "traddr": "10.0.0.1", 00:33:56.373 "trsvcid": "56016", 00:33:56.373 "trtype": "TCP" 00:33:56.373 }, 00:33:56.373 "qid": 0, 00:33:56.373 "state": "enabled", 00:33:56.373 "thread": "nvmf_tgt_poll_group_000" 00:33:56.373 } 00:33:56.373 ]' 00:33:56.373 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:56.373 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:33:56.373 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:56.373 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:33:56.373 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:56.373 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:56.373 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:56.373 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:56.631 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:33:56.631 20:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:33:57.567 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:57.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:57.567 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:33:57.567 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.567 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:57.567 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.567 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:57.567 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:57.567 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:33:57.826 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:33:57.826 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:57.826 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:33:57.826 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:33:57.826 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:33:57.826 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:57.826 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:57.826 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.826 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:57.826 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.826 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:57.826 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:57.826 20:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:58.084 00:33:58.342 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:58.342 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:58.342 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:58.600 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.600 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:58.600 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.600 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:58.600 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.601 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:58.601 { 00:33:58.601 "auth": { 00:33:58.601 "dhgroup": "ffdhe6144", 00:33:58.601 "digest": "sha256", 00:33:58.601 "state": "completed" 00:33:58.601 }, 00:33:58.601 "cntlid": 35, 00:33:58.601 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:33:58.601 "listen_address": { 00:33:58.601 "adrfam": "IPv4", 00:33:58.601 "traddr": "10.0.0.3", 00:33:58.601 "trsvcid": "4420", 00:33:58.601 "trtype": "TCP" 00:33:58.601 }, 00:33:58.601 "peer_address": { 00:33:58.601 "adrfam": "IPv4", 00:33:58.601 "traddr": "10.0.0.1", 00:33:58.601 "trsvcid": "56048", 00:33:58.601 "trtype": "TCP" 00:33:58.601 }, 00:33:58.601 "qid": 0, 00:33:58.601 "state": "enabled", 00:33:58.601 "thread": "nvmf_tgt_poll_group_000" 00:33:58.601 } 00:33:58.601 ]' 00:33:58.601 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:58.601 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:33:58.601 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:58.601 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:33:58.601 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:58.601 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:58.601 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:58.601 20:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:59.167 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:33:59.167 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:33:59.733 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:59.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:59.733 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:33:59.733 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.733 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:59.733 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.733 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:59.734 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:59.734 20:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:59.992 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:33:59.992 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:59.992 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:33:59.992 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:33:59.992 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:33:59.992 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:59.992 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:59.992 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.992 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:59.992 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.992 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:59.992 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:59.992 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:00.560 00:34:00.560 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:00.560 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:00.560 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:00.818 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.818 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:00.818 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.818 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:00.818 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.818 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:00.818 { 00:34:00.818 "auth": { 00:34:00.818 "dhgroup": "ffdhe6144", 
00:34:00.818 "digest": "sha256", 00:34:00.818 "state": "completed" 00:34:00.818 }, 00:34:00.818 "cntlid": 37, 00:34:00.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:34:00.818 "listen_address": { 00:34:00.818 "adrfam": "IPv4", 00:34:00.818 "traddr": "10.0.0.3", 00:34:00.818 "trsvcid": "4420", 00:34:00.818 "trtype": "TCP" 00:34:00.818 }, 00:34:00.818 "peer_address": { 00:34:00.818 "adrfam": "IPv4", 00:34:00.818 "traddr": "10.0.0.1", 00:34:00.818 "trsvcid": "56078", 00:34:00.818 "trtype": "TCP" 00:34:00.818 }, 00:34:00.818 "qid": 0, 00:34:00.818 "state": "enabled", 00:34:00.818 "thread": "nvmf_tgt_poll_group_000" 00:34:00.818 } 00:34:00.818 ]' 00:34:00.818 20:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:00.818 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:34:00.818 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:01.077 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:34:01.077 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:01.077 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:01.077 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:01.077 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:01.335 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:34:01.335 20:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:34:01.901 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:01.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:01.901 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:34:01.901 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.901 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:01.901 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.901 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:01.901 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:34:01.901 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:02.159 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:34:02.159 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:34:02.159 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:34:02.159 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:34:02.159 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:34:02.159 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:02.160 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key3 00:34:02.160 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.160 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:02.160 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.160 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:34:02.160 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:34:02.160 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:34:02.726 00:34:02.726 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:02.726 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:02.726 20:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:02.984 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.984 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:02.984 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.984 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:03.242 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.242 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:03.242 { 00:34:03.242 "auth": { 00:34:03.242 "dhgroup": 
"ffdhe6144", 00:34:03.242 "digest": "sha256", 00:34:03.242 "state": "completed" 00:34:03.242 }, 00:34:03.242 "cntlid": 39, 00:34:03.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:34:03.242 "listen_address": { 00:34:03.242 "adrfam": "IPv4", 00:34:03.242 "traddr": "10.0.0.3", 00:34:03.242 "trsvcid": "4420", 00:34:03.242 "trtype": "TCP" 00:34:03.242 }, 00:34:03.242 "peer_address": { 00:34:03.242 "adrfam": "IPv4", 00:34:03.242 "traddr": "10.0.0.1", 00:34:03.242 "trsvcid": "48280", 00:34:03.242 "trtype": "TCP" 00:34:03.242 }, 00:34:03.242 "qid": 0, 00:34:03.242 "state": "enabled", 00:34:03.242 "thread": "nvmf_tgt_poll_group_000" 00:34:03.242 } 00:34:03.242 ]' 00:34:03.242 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:03.242 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:34:03.242 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:03.242 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:34:03.242 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:03.242 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:03.242 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:03.242 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:03.501 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:34:03.501 20:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:34:04.433 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:04.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:04.434 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:34:04.434 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.434 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:04.434 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.434 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:34:04.434 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:04.434 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:04.434 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:04.691 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:34:04.691 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:34:04.691 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:34:04.691 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:34:04.691 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:34:04.691 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:04.691 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:04.691 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.691 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:04.691 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.691 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:04.691 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:04.691 20:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:05.256 00:34:05.256 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:05.256 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:05.256 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:05.514 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.514 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:05.514 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.514 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:05.514 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.514 20:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:05.514 { 00:34:05.514 "auth": { 00:34:05.514 "dhgroup": "ffdhe8192", 00:34:05.514 "digest": "sha256", 00:34:05.514 "state": "completed" 00:34:05.514 }, 00:34:05.514 "cntlid": 41, 00:34:05.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:34:05.514 "listen_address": { 00:34:05.514 "adrfam": "IPv4", 00:34:05.514 "traddr": "10.0.0.3", 00:34:05.514 "trsvcid": "4420", 00:34:05.514 "trtype": "TCP" 00:34:05.514 }, 00:34:05.514 "peer_address": { 00:34:05.514 "adrfam": "IPv4", 00:34:05.514 "traddr": "10.0.0.1", 00:34:05.514 "trsvcid": "48312", 00:34:05.514 "trtype": "TCP" 00:34:05.514 }, 00:34:05.514 "qid": 0, 00:34:05.514 "state": "enabled", 00:34:05.514 "thread": "nvmf_tgt_poll_group_000" 00:34:05.514 } 00:34:05.514 ]' 00:34:05.514 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:05.772 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:34:05.772 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:05.772 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:34:05.772 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:05.772 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:05.772 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:05.772 20:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:06.030 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:34:06.030 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:34:06.965 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:06.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:06.965 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:34:06.965 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.965 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:06.965 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.965 20:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:06.965 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:06.965 20:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:07.223 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:34:07.223 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:34:07.223 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:34:07.223 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:34:07.223 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:34:07.223 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:07.223 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:07.223 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.223 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:07.223 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.223 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:07.223 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:07.223 20:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:07.788 00:34:07.788 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:07.788 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:07.788 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:08.354 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.354 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:08.354 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.354 20:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:08.354 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.354 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:08.354 { 00:34:08.354 "auth": { 00:34:08.354 "dhgroup": "ffdhe8192", 00:34:08.354 "digest": "sha256", 00:34:08.354 "state": "completed" 00:34:08.354 }, 00:34:08.354 "cntlid": 43, 00:34:08.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:34:08.354 "listen_address": { 00:34:08.354 "adrfam": "IPv4", 00:34:08.354 "traddr": "10.0.0.3", 00:34:08.354 "trsvcid": "4420", 00:34:08.354 "trtype": "TCP" 00:34:08.354 }, 00:34:08.354 "peer_address": { 00:34:08.354 "adrfam": "IPv4", 00:34:08.354 "traddr": "10.0.0.1", 00:34:08.354 "trsvcid": "48346", 00:34:08.354 "trtype": "TCP" 00:34:08.354 }, 00:34:08.354 "qid": 0, 00:34:08.354 "state": "enabled", 00:34:08.354 "thread": "nvmf_tgt_poll_group_000" 00:34:08.354 } 00:34:08.354 ]' 00:34:08.354 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:08.354 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:34:08.354 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:08.354 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:34:08.354 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:08.354 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:08.354 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:08.354 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:08.919 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:34:08.919 20:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:34:09.486 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:09.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:09.486 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:34:09.486 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.486 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
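Each iteration also drives the kernel initiator with the same key material, as the nvme connect/disconnect pairs above show. A hedged one-shot equivalent follows, with placeholder DHHC-1 strings standing in for the test's generated secrets; every flag is the traced one.

    # Kernel nvme-tcp initiator authenticating with DH-HMAC-CHAP. The two
    # DHHC-1 strings are placeholders, not the test's real secrets; supplying
    # --dhchap-ctrl-secret as well makes the authentication bidirectional.
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 \
        --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 \
        --dhchap-secret 'DHHC-1:01:<host key, base64>:' \
        --dhchap-ctrl-secret 'DHHC-1:02:<controller key, base64>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
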
00:34:09.486 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.486 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:09.486 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:09.486 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:09.745 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:34:09.745 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:34:09.745 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:34:09.745 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:34:09.745 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:34:09.745 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:09.745 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:09.745 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.745 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:09.745 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.745 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:09.745 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:09.745 20:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:10.679 00:34:10.679 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:10.679 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:10.679 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:10.938 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.938 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:10.938 20:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.938 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:10.938 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.938 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:10.938 { 00:34:10.938 "auth": { 00:34:10.938 "dhgroup": "ffdhe8192", 00:34:10.938 "digest": "sha256", 00:34:10.938 "state": "completed" 00:34:10.938 }, 00:34:10.938 "cntlid": 45, 00:34:10.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:34:10.939 "listen_address": { 00:34:10.939 "adrfam": "IPv4", 00:34:10.939 "traddr": "10.0.0.3", 00:34:10.939 "trsvcid": "4420", 00:34:10.939 "trtype": "TCP" 00:34:10.939 }, 00:34:10.939 "peer_address": { 00:34:10.939 "adrfam": "IPv4", 00:34:10.939 "traddr": "10.0.0.1", 00:34:10.939 "trsvcid": "48378", 00:34:10.939 "trtype": "TCP" 00:34:10.939 }, 00:34:10.939 "qid": 0, 00:34:10.939 "state": "enabled", 00:34:10.939 "thread": "nvmf_tgt_poll_group_000" 00:34:10.939 } 00:34:10.939 ]' 00:34:10.939 20:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:10.939 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:34:10.939 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:10.939 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:34:10.939 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:10.939 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:10.939 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:10.939 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:11.197 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:34:11.197 20:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:34:12.155 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:12.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:12.155 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:34:12.155 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:34:12.155 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:12.155 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.155 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:12.155 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:12.155 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:12.414 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:34:12.414 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:34:12.414 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:34:12.414 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:34:12.414 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:34:12.414 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:12.414 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key3 00:34:12.414 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.414 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:12.414 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.414 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:34:12.414 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:34:12.414 20:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:34:12.981 00:34:12.981 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:12.981 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:12.981 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:13.240 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.240 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:13.240 
20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.240 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:13.240 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.240 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:13.240 { 00:34:13.240 "auth": { 00:34:13.240 "dhgroup": "ffdhe8192", 00:34:13.240 "digest": "sha256", 00:34:13.240 "state": "completed" 00:34:13.240 }, 00:34:13.240 "cntlid": 47, 00:34:13.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:34:13.240 "listen_address": { 00:34:13.240 "adrfam": "IPv4", 00:34:13.240 "traddr": "10.0.0.3", 00:34:13.240 "trsvcid": "4420", 00:34:13.240 "trtype": "TCP" 00:34:13.240 }, 00:34:13.240 "peer_address": { 00:34:13.240 "adrfam": "IPv4", 00:34:13.240 "traddr": "10.0.0.1", 00:34:13.240 "trsvcid": "35704", 00:34:13.240 "trtype": "TCP" 00:34:13.240 }, 00:34:13.240 "qid": 0, 00:34:13.240 "state": "enabled", 00:34:13.240 "thread": "nvmf_tgt_poll_group_000" 00:34:13.240 } 00:34:13.240 ]' 00:34:13.498 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:13.498 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:34:13.498 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:13.498 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:34:13.498 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:13.498 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:13.498 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:13.498 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:13.757 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:34:13.757 20:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:34:14.691 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:14.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:14.691 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:34:14.691 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.691 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
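The four keys cycled through in this section exercise all four DH-HMAC-CHAP secret representations: the two-digit field after DHHC-1: records how the secret was transformed (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which is why key0 secrets above start with DHHC-1:00: and key3 secrets with DHHC-1:03:. A small sketch, using a dummy all-zero payload rather than a real key, pulls those fields apart; the trailing-CRC32 detail is our reading of the secret format, not something visible in this log.

    # Decompose a DHHC-1 secret string. The payload here is 48 base64 'A'
    # characters (36 zero bytes) purely for illustration; a real payload is
    # the key followed by a 4-byte CRC32, so a 32-byte key decodes to 36 bytes.
    secret='DHHC-1:00:AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA:'
    hash_id=${secret#DHHC-1:}; hash_id=${hash_id%%:*}     # -> 00 (unhashed)
    payload=${secret#DHHC-1:??:}; payload=${payload%:}
    printf 'hash field: %s, decoded bytes: %s\n' "$hash_id" \
        "$(printf %s "$payload" | base64 -d | wc -c)"     # -> 36
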
00:34:14.691 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.691 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:34:14.691 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:34:14.691 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:14.691 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:34:14.691 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:34:14.950 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:34:14.950 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:34:14.950 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:34:14.950 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:34:14.950 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:34:14.950 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:14.950 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:14.950 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.950 20:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:14.950 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.950 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:14.950 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:14.950 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:15.208 00:34:15.208 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:15.209 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:15.209 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:15.468 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.468 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:15.468 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.468 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:15.468 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.468 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:15.468 { 00:34:15.468 "auth": { 00:34:15.468 "dhgroup": "null", 00:34:15.468 "digest": "sha384", 00:34:15.468 "state": "completed" 00:34:15.468 }, 00:34:15.468 "cntlid": 49, 00:34:15.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:34:15.468 "listen_address": { 00:34:15.468 "adrfam": "IPv4", 00:34:15.468 "traddr": "10.0.0.3", 00:34:15.468 "trsvcid": "4420", 00:34:15.468 "trtype": "TCP" 00:34:15.468 }, 00:34:15.468 "peer_address": { 00:34:15.468 "adrfam": "IPv4", 00:34:15.468 "traddr": "10.0.0.1", 00:34:15.468 "trsvcid": "35734", 00:34:15.468 "trtype": "TCP" 00:34:15.468 }, 00:34:15.468 "qid": 0, 00:34:15.468 "state": "enabled", 00:34:15.468 "thread": "nvmf_tgt_poll_group_000" 00:34:15.468 } 00:34:15.468 ]' 00:34:15.468 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:15.468 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:34:15.468 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:15.726 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:34:15.726 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:15.726 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:15.726 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:15.726 20:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:15.985 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:34:15.985 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:34:16.552 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:16.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:16.811 20:33:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:34:16.811 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.811 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:16.811 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.811 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:16.811 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:34:16.811 20:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:34:17.070 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:34:17.070 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:34:17.071 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:34:17.071 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:34:17.071 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:34:17.071 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:17.071 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:17.071 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.071 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:17.071 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.071 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:17.071 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:17.071 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:17.330 00:34:17.330 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:17.330 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:17.330 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:17.897 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.897 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:17.897 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.897 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:17.897 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.897 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:17.897 { 00:34:17.897 "auth": { 00:34:17.897 "dhgroup": "null", 00:34:17.897 "digest": "sha384", 00:34:17.897 "state": "completed" 00:34:17.897 }, 00:34:17.897 "cntlid": 51, 00:34:17.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:34:17.897 "listen_address": { 00:34:17.897 "adrfam": "IPv4", 00:34:17.897 "traddr": "10.0.0.3", 00:34:17.897 "trsvcid": "4420", 00:34:17.897 "trtype": "TCP" 00:34:17.897 }, 00:34:17.897 "peer_address": { 00:34:17.897 "adrfam": "IPv4", 00:34:17.897 "traddr": "10.0.0.1", 00:34:17.897 "trsvcid": "35770", 00:34:17.897 "trtype": "TCP" 00:34:17.897 }, 00:34:17.897 "qid": 0, 00:34:17.897 "state": "enabled", 00:34:17.897 "thread": "nvmf_tgt_poll_group_000" 00:34:17.897 } 00:34:17.897 ]' 00:34:17.897 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:17.897 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:34:17.897 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:17.897 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:34:17.897 20:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:17.897 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:17.897 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:17.897 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:18.157 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:34:18.157 20:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:34:19.094 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:19.094 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:19.094 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:34:19.094 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.094 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:19.094 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.094 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:19.094 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:34:19.094 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:34:19.372 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:34:19.372 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:34:19.372 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:34:19.372 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:34:19.372 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:34:19.372 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:19.372 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:19.372 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.372 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:19.372 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.372 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:19.372 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:19.372 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:19.631 00:34:19.631 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:19.631 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:19.631 20:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:19.889 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.889 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:19.889 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.889 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:19.889 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.889 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:19.890 { 00:34:19.890 "auth": { 00:34:19.890 "dhgroup": "null", 00:34:19.890 "digest": "sha384", 00:34:19.890 "state": "completed" 00:34:19.890 }, 00:34:19.890 "cntlid": 53, 00:34:19.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:34:19.890 "listen_address": { 00:34:19.890 "adrfam": "IPv4", 00:34:19.890 "traddr": "10.0.0.3", 00:34:19.890 "trsvcid": "4420", 00:34:19.890 "trtype": "TCP" 00:34:19.890 }, 00:34:19.890 "peer_address": { 00:34:19.890 "adrfam": "IPv4", 00:34:19.890 "traddr": "10.0.0.1", 00:34:19.890 "trsvcid": "35792", 00:34:19.890 "trtype": "TCP" 00:34:19.890 }, 00:34:19.890 "qid": 0, 00:34:19.890 "state": "enabled", 00:34:19.890 "thread": "nvmf_tgt_poll_group_000" 00:34:19.890 } 00:34:19.890 ]' 00:34:19.890 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:19.890 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:34:19.890 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:20.148 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:34:20.148 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:20.148 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:20.148 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:20.148 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:20.407 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:34:20.407 20:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:34:20.974 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:20.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:20.974 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:34:20.974 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.974 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:20.974 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.975 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:20.975 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:34:20.975 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:34:21.542 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:34:21.542 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:34:21.542 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:34:21.542 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:34:21.542 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:34:21.542 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:21.542 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key3 00:34:21.542 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.542 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:21.542 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.542 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:34:21.542 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:34:21.542 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:34:21.801 00:34:21.801 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:21.801 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
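
The verification that follows amounts to reading the negotiated auth parameters back off the target's qpair and comparing them with what the host was configured to offer; a minimal sketch, assuming the target RPC socket is the default one and jq is available on the build VM:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
digest=sha384 dhgroup=null

# A single qpair should exist for the freshly attached nvme0 controller.
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
# "completed" means the DH-HMAC-CHAP exchange finished successfully.
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
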
00:34:21.801 20:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:22.060 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.060 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:22.060 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.060 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:22.060 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.060 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:22.060 { 00:34:22.060 "auth": { 00:34:22.060 "dhgroup": "null", 00:34:22.060 "digest": "sha384", 00:34:22.060 "state": "completed" 00:34:22.060 }, 00:34:22.060 "cntlid": 55, 00:34:22.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:34:22.060 "listen_address": { 00:34:22.060 "adrfam": "IPv4", 00:34:22.060 "traddr": "10.0.0.3", 00:34:22.060 "trsvcid": "4420", 00:34:22.060 "trtype": "TCP" 00:34:22.060 }, 00:34:22.060 "peer_address": { 00:34:22.060 "adrfam": "IPv4", 00:34:22.060 "traddr": "10.0.0.1", 00:34:22.060 "trsvcid": "35834", 00:34:22.060 "trtype": "TCP" 00:34:22.060 }, 00:34:22.060 "qid": 0, 00:34:22.060 "state": "enabled", 00:34:22.060 "thread": "nvmf_tgt_poll_group_000" 00:34:22.060 } 00:34:22.060 ]' 00:34:22.060 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:22.318 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:34:22.318 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:22.318 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:34:22.318 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:22.318 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:22.318 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:22.318 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:22.578 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:34:22.578 20:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:34:23.514 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:23.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
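
Each pass also drives the same credentials through the kernel initiator before the host entry is removed; a condensed sketch of that leg, with the DHHC-1 secrets shown as placeholders rather than the real strings logged above:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703
hostid=e1b29161-2f6e-4e88-9903-c5274a848703

# --dhchap-ctrl-secret is only passed on passes where a controller key
# (ckey0..ckey2) was configured; the key3 passes send --dhchap-secret alone.
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
nvme disconnect -n nqn.2024-03.io.spdk:cnode0

# Drop the host entry so the next combination starts from a clean slate.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
    nqn.2024-03.io.spdk:cnode0 "$hostnqn"
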
00:34:23.514 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:34:23.514 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.514 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:23.514 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.514 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:34:23.514 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:23.514 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:23.514 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:23.772 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:34:23.772 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:34:23.772 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:34:23.772 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:34:23.772 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:34:23.772 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:23.772 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:23.772 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.772 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:23.772 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.772 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:23.772 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:23.772 20:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:24.030 00:34:24.030 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:24.030 20:33:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:24.030 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:24.289 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.289 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:24.289 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.289 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:24.548 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.548 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:24.548 { 00:34:24.548 "auth": { 00:34:24.548 "dhgroup": "ffdhe2048", 00:34:24.548 "digest": "sha384", 00:34:24.548 "state": "completed" 00:34:24.548 }, 00:34:24.548 "cntlid": 57, 00:34:24.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:34:24.548 "listen_address": { 00:34:24.548 "adrfam": "IPv4", 00:34:24.548 "traddr": "10.0.0.3", 00:34:24.548 "trsvcid": "4420", 00:34:24.548 "trtype": "TCP" 00:34:24.548 }, 00:34:24.548 "peer_address": { 00:34:24.548 "adrfam": "IPv4", 00:34:24.548 "traddr": "10.0.0.1", 00:34:24.548 "trsvcid": "51216", 00:34:24.548 "trtype": "TCP" 00:34:24.548 }, 00:34:24.548 "qid": 0, 00:34:24.548 "state": "enabled", 00:34:24.548 "thread": "nvmf_tgt_poll_group_000" 00:34:24.548 } 00:34:24.548 ]' 00:34:24.548 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:24.548 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:34:24.548 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:24.548 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:34:24.548 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:24.548 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:24.548 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:24.548 20:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:24.809 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:34:24.809 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: 
--dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:34:25.778 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:25.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:25.778 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:34:25.778 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.778 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:25.778 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.778 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:25.778 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:25.778 20:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:26.037 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:34:26.037 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:34:26.037 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:34:26.037 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:34:26.037 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:34:26.037 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:26.037 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:26.037 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.037 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:26.037 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.037 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:26.037 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:26.037 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:26.296 00:34:26.296 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:26.296 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:26.296 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:26.554 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.554 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:26.554 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.554 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:26.812 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.812 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:26.812 { 00:34:26.812 "auth": { 00:34:26.812 "dhgroup": "ffdhe2048", 00:34:26.812 "digest": "sha384", 00:34:26.812 "state": "completed" 00:34:26.812 }, 00:34:26.812 "cntlid": 59, 00:34:26.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:34:26.812 "listen_address": { 00:34:26.812 "adrfam": "IPv4", 00:34:26.812 "traddr": "10.0.0.3", 00:34:26.812 "trsvcid": "4420", 00:34:26.812 "trtype": "TCP" 00:34:26.812 }, 00:34:26.812 "peer_address": { 00:34:26.812 "adrfam": "IPv4", 00:34:26.812 "traddr": "10.0.0.1", 00:34:26.812 "trsvcid": "51240", 00:34:26.812 "trtype": "TCP" 00:34:26.812 }, 00:34:26.812 "qid": 0, 00:34:26.812 "state": "enabled", 00:34:26.812 "thread": "nvmf_tgt_poll_group_000" 00:34:26.812 } 00:34:26.812 ]' 00:34:26.812 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:26.812 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:34:26.812 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:26.812 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:34:26.812 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:26.812 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:26.812 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:26.812 20:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:27.070 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:34:27.071 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:34:28.007 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:28.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:28.007 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:34:28.007 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.007 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:28.007 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.007 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:28.007 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:28.007 20:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:28.266 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:34:28.266 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:34:28.266 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:34:28.266 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:34:28.266 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:34:28.266 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:28.266 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:28.266 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.266 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:28.266 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.266 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:28.266 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:28.266 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:28.525 00:34:28.525 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:28.525 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:28.525 20:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:28.783 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.783 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:28.783 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.783 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:28.783 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.783 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:28.783 { 00:34:28.783 "auth": { 00:34:28.783 "dhgroup": "ffdhe2048", 00:34:28.783 "digest": "sha384", 00:34:28.783 "state": "completed" 00:34:28.783 }, 00:34:28.783 "cntlid": 61, 00:34:28.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:34:28.783 "listen_address": { 00:34:28.783 "adrfam": "IPv4", 00:34:28.783 "traddr": "10.0.0.3", 00:34:28.783 "trsvcid": "4420", 00:34:28.783 "trtype": "TCP" 00:34:28.783 }, 00:34:28.783 "peer_address": { 00:34:28.783 "adrfam": "IPv4", 00:34:28.783 "traddr": "10.0.0.1", 00:34:28.783 "trsvcid": "51266", 00:34:28.783 "trtype": "TCP" 00:34:28.783 }, 00:34:28.783 "qid": 0, 00:34:28.783 "state": "enabled", 00:34:28.783 "thread": "nvmf_tgt_poll_group_000" 00:34:28.783 } 00:34:28.783 ]' 00:34:28.783 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:29.042 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:34:29.042 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:29.042 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:34:29.042 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:29.042 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:29.042 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:29.042 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:29.301 20:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:34:29.301 20:33:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:34:30.232 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:30.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:30.232 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:34:30.232 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.232 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:30.232 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.232 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:30.232 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:30.232 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:30.564 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:34:30.564 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:34:30.564 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:34:30.564 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:34:30.564 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:34:30.564 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:30.564 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key3 00:34:30.564 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.564 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:30.564 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.564 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:34:30.564 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:34:30.564 20:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:34:30.829
00:34:30.829 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:34:30.829 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:34:30.829 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:34:31.397 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:31.397 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:34:31.397 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:31.397 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:31.397 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:31.397 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:34:31.397 {
00:34:31.397 "auth": {
00:34:31.397 "dhgroup": "ffdhe2048",
00:34:31.397 "digest": "sha384",
00:34:31.397 "state": "completed"
00:34:31.397 },
00:34:31.397 "cntlid": 63,
00:34:31.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703",
00:34:31.397 "listen_address": {
00:34:31.397 "adrfam": "IPv4",
00:34:31.397 "traddr": "10.0.0.3",
00:34:31.397 "trsvcid": "4420",
00:34:31.397 "trtype": "TCP"
00:34:31.397 },
00:34:31.397 "peer_address": {
00:34:31.397 "adrfam": "IPv4",
00:34:31.397 "traddr": "10.0.0.1",
00:34:31.397 "trsvcid": "51296",
00:34:31.397 "trtype": "TCP"
00:34:31.397 },
00:34:31.397 "qid": 0,
00:34:31.397 "state": "enabled",
00:34:31.397 "thread": "nvmf_tgt_poll_group_000"
00:34:31.397 }
00:34:31.397 ]'
00:34:31.397 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:34:31.397 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:34:31.397 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:34:31.397 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:34:31.397 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:34:31.397 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:34:31.397 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:34:31.397 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:34:31.656 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=:
00:34:31.656 20:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=:
00:34:32.594 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:34:32.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:34:32.594 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703
00:34:32.594 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:32.594 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:32.594 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:32.594 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:34:32.594 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:34:32.594 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:34:32.594 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:34:32.853 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0
00:34:32.853 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:34:32.853 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:34:32.853 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:34:32.853 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:34:32.853 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:34:32.853 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:32.853 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:32.853 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:32.853 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:32.853 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:32.853 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:32.853 20:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:33.111
00:34:33.111 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:34:33.111 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:34:33.111 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:34:33.370 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:33.370 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:34:33.370 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:33.370 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:33.370 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:33.370 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:34:33.370 {
00:34:33.370 "auth": {
00:34:33.370 "dhgroup": "ffdhe3072",
00:34:33.370 "digest": "sha384",
00:34:33.370 "state": "completed"
00:34:33.370 },
00:34:33.370 "cntlid": 65,
00:34:33.370 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703",
00:34:33.370 "listen_address": {
00:34:33.370 "adrfam": "IPv4",
00:34:33.370 "traddr": "10.0.0.3",
00:34:33.371 "trsvcid": "4420",
00:34:33.371 "trtype": "TCP"
00:34:33.371 },
00:34:33.371 "peer_address": {
00:34:33.371 "adrfam": "IPv4",
00:34:33.371 "traddr": "10.0.0.1",
00:34:33.371 "trsvcid": "56824",
00:34:33.371 "trtype": "TCP"
00:34:33.371 },
00:34:33.371 "qid": 0,
00:34:33.371 "state": "enabled",
00:34:33.371 "thread": "nvmf_tgt_poll_group_000"
00:34:33.371 }
00:34:33.371 ]'
00:34:33.630 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:34:33.630 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:34:33.630 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:34:33.630 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:34:33.630 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:34:33.630 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:34:33.630 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:34:33.630 20:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:34:33.888 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=:
00:34:33.888 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=:
00:34:34.825 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:34:34.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:34:34.825 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703
00:34:34.825 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:34.825 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:34.825 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:34.825 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:34:34.825 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:34:34.825 20:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:34:35.083 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1
00:34:35.083 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:34:35.083 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:34:35.083 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:34:35.083 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:34:35.083 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:34:35.083 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:35.084 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:35.084 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:35.084 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:35.084 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:35.084 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:35.084 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:35.342
00:34:35.342 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:34:35.342 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:34:35.342 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:34:35.911 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:35.911 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:34:35.911 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:35.911 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:35.911 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:35.911 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:34:35.911 {
00:34:35.911 "auth": {
00:34:35.911 "dhgroup": "ffdhe3072",
00:34:35.911 "digest": "sha384",
00:34:35.911 "state": "completed"
00:34:35.911 },
00:34:35.911 "cntlid": 67,
00:34:35.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703",
00:34:35.911 "listen_address": {
00:34:35.911 "adrfam": "IPv4",
00:34:35.911 "traddr": "10.0.0.3",
00:34:35.911 "trsvcid": "4420",
00:34:35.911 "trtype": "TCP"
00:34:35.911 },
00:34:35.911 "peer_address": {
00:34:35.911 "adrfam": "IPv4",
00:34:35.911 "traddr": "10.0.0.1",
00:34:35.911 "trsvcid": "56852",
00:34:35.911 "trtype": "TCP"
00:34:35.911 },
00:34:35.911 "qid": 0,
00:34:35.911 "state": "enabled",
00:34:35.911 "thread": "nvmf_tgt_poll_group_000"
00:34:35.911 }
00:34:35.911 ]'
00:34:35.911 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:34:35.911 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:34:35.911 20:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:34:35.911 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:34:35.911 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:34:35.911 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:34:35.911 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:34:35.911 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:34:36.171 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==:
00:34:36.171 20:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==:
00:34:37.107 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:34:37.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:34:37.107 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703
00:34:37.107 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:37.107 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:37.107 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:37.107 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:34:37.107 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:34:37.107 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:34:37.366 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2
00:34:37.366 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:34:37.366 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:34:37.366 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:34:37.366 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:34:37.366 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:34:37.366 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:37.367 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:37.367 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:37.367 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:37.367 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:37.367 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:37.367 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:37.625
00:34:37.884 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:34:37.884 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:34:37.884 20:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:34:38.142 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:38.142 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:34:38.142 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:38.142 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:38.142 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:38.142 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:34:38.142 {
00:34:38.142 "auth": {
00:34:38.142 "dhgroup": "ffdhe3072",
00:34:38.142 "digest": "sha384",
00:34:38.142 "state": "completed"
00:34:38.142 },
00:34:38.142 "cntlid": 69,
00:34:38.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703",
00:34:38.142 "listen_address": {
00:34:38.142 "adrfam": "IPv4",
00:34:38.142 "traddr": "10.0.0.3",
00:34:38.142 "trsvcid": "4420",
00:34:38.142 "trtype": "TCP"
00:34:38.142 },
00:34:38.142 "peer_address": {
00:34:38.142 "adrfam": "IPv4",
00:34:38.142 "traddr": "10.0.0.1",
00:34:38.142 "trsvcid": "56876",
00:34:38.142 "trtype": "TCP"
00:34:38.142 },
00:34:38.142 "qid": 0,
00:34:38.142 "state": "enabled",
00:34:38.142 "thread": "nvmf_tgt_poll_group_000"
00:34:38.142 }
00:34:38.142 ]'
00:34:38.142 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:34:38.142 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:34:38.142 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:34:38.142 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:34:38.142 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:34:38.142 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:34:38.142 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:34:38.142 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:34:38.710 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES:
00:34:38.710 20:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES:
00:34:39.276 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:34:39.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:34:39.276 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703
00:34:39.276 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:39.276 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:39.276 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:39.276 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:34:39.276 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:34:39.276 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:34:39.535 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3
00:34:39.535 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:34:39.535 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:34:39.535 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:34:39.535 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:34:39.535 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:34:39.535 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key3
00:34:39.535 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:39.535 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:39.535 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:39.535 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:34:39.535 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:34:39.535 20:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:34:39.793
00:34:39.793 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:34:39.793 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:34:39.793 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:34:40.360 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:40.360 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:34:40.360 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:40.360 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:40.360 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:40.360 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:34:40.360 {
00:34:40.360 "auth": {
00:34:40.360 "dhgroup": "ffdhe3072",
00:34:40.360 "digest": "sha384",
00:34:40.360 "state": "completed"
00:34:40.360 },
00:34:40.360 "cntlid": 71,
00:34:40.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703",
00:34:40.360 "listen_address": {
00:34:40.360 "adrfam": "IPv4",
00:34:40.360 "traddr": "10.0.0.3",
00:34:40.360 "trsvcid": "4420",
00:34:40.360 "trtype": "TCP"
00:34:40.360 },
00:34:40.360 "peer_address": {
00:34:40.360 "adrfam": "IPv4",
00:34:40.360 "traddr": "10.0.0.1",
00:34:40.360 "trsvcid": "56918",
00:34:40.360 "trtype": "TCP"
00:34:40.360 },
00:34:40.360 "qid": 0,
00:34:40.360 "state": "enabled",
00:34:40.360 "thread": "nvmf_tgt_poll_group_000"
00:34:40.360 }
00:34:40.360 ]'
00:34:40.360 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:34:40.360 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:34:40.360 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:34:40.360 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:34:40.360 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:34:40.360 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:34:40.360 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:34:40.360 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:34:40.621 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=:
00:34:40.621 20:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=:
00:34:41.191 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:34:41.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:34:41.449 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703
00:34:41.449 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:41.449 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:41.449 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
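The same exchange repeats above once per (digest, dhgroup, key) combination; only the dhgroup, the key index, and the resulting cntlid and peer port change between passes. As a reading aid, here is a minimal bash sketch of the loop that target/auth.sh lines 119-123 appear to be running, reconstructed purely from the xtrace above. It is not the verbatim SPDK test script: rpc_cmd (the target-side RPC wrapper), $subnqn, $hostnqn, and the keys/ckeys arrays are assumed to have been set up earlier in the test, and the nvme-cli connect/disconnect re-check at auth.sh@80-83 is omitted.

# Sketch only -- reconstructed from the xtrace, not copied from SPDK.
hostrpc() {
    # Host-side RPC socket, as seen at target/auth.sh@31 above.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
}

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3 qpairs
    # Controller key is optional: key3 above has no ckey3 counterpart.
    local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

    # Target side: allow this host with the chosen DH-HMAC-CHAP key pair.
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"

    # Host side: attach a controller; authentication runs during CONNECT.
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid" "${ckey[@]}"

    # Confirm the qpair negotiated exactly what was configured.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == "completed" ]]

    hostrpc bdev_nvme_detach_controller nvme0
}

# dhgroup list inferred from this log section (ffdhe2048..ffdhe6144 so far).
for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do
    for keyid in "${!keys[@]}"; do
        # Restrict the host to one digest/dhgroup combination per pass.
        hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
            --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha384 "$dhgroup" "$keyid"
    done
done

Removing the host from the subsystem at the end of every pass (auth.sh@83 above) presumably keeps each authentication attempt independent, so a controller left over from one key cannot mask a failed handshake with the next.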
00:34:41.449 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:34:41.449 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:34:41.449 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:34:41.449 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:34:41.708 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0
00:34:41.708 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:34:41.708 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:34:41.708 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:34:41.708 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:34:41.708 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:34:41.708 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:41.708 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:41.708 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:41.708 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:41.708 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:41.708 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:41.708 20:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:41.967
00:34:41.967 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:34:41.967 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:34:41.967 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:34:42.534 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:42.534 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:34:42.534 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:42.534 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:42.534 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:42.534 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:34:42.534 {
00:34:42.534 "auth": {
00:34:42.534 "dhgroup": "ffdhe4096",
00:34:42.534 "digest": "sha384",
00:34:42.534 "state": "completed"
00:34:42.534 },
00:34:42.534 "cntlid": 73,
00:34:42.534 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703",
00:34:42.534 "listen_address": {
00:34:42.534 "adrfam": "IPv4",
00:34:42.534 "traddr": "10.0.0.3",
00:34:42.534 "trsvcid": "4420",
00:34:42.534 "trtype": "TCP"
00:34:42.534 },
00:34:42.534 "peer_address": {
00:34:42.534 "adrfam": "IPv4",
00:34:42.534 "traddr": "10.0.0.1",
00:34:42.534 "trsvcid": "56936",
00:34:42.534 "trtype": "TCP"
00:34:42.534 },
00:34:42.534 "qid": 0,
00:34:42.534 "state": "enabled",
00:34:42.534 "thread": "nvmf_tgt_poll_group_000"
00:34:42.534 }
00:34:42.534 ]'
00:34:42.534 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:34:42.534 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:34:42.534 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:34:42.534 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:34:42.534 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:34:42.534 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:34:42.534 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:34:42.534 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:34:42.793 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=:
00:34:42.793 20:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=:
00:34:43.726 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:34:43.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:34:43.726 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703
00:34:43.726 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:43.726 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:43.726 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:43.726 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:34:43.726 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:34:43.726 20:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:34:43.983 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1
00:34:43.984 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:34:43.984 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:34:43.984 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:34:43.984 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:34:43.984 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:34:43.984 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:43.984 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:43.984 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:43.984 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:43.984 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:43.984 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:43.984 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:44.242
00:34:44.501 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:34:44.501 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:34:44.501 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:34:44.760 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:44.760 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:34:44.760 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:44.760 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:44.760 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:44.760 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:34:44.760 {
00:34:44.760 "auth": {
00:34:44.760 "dhgroup": "ffdhe4096",
00:34:44.760 "digest": "sha384",
00:34:44.760 "state": "completed"
00:34:44.760 },
00:34:44.760 "cntlid": 75,
00:34:44.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703",
00:34:44.760 "listen_address": {
00:34:44.760 "adrfam": "IPv4",
00:34:44.760 "traddr": "10.0.0.3",
00:34:44.760 "trsvcid": "4420",
00:34:44.760 "trtype": "TCP"
00:34:44.760 },
00:34:44.760 "peer_address": {
00:34:44.760 "adrfam": "IPv4",
00:34:44.760 "traddr": "10.0.0.1",
00:34:44.760 "trsvcid": "49416",
00:34:44.760 "trtype": "TCP"
00:34:44.760 },
00:34:44.760 "qid": 0,
00:34:44.760 "state": "enabled",
00:34:44.760 "thread": "nvmf_tgt_poll_group_000"
00:34:44.760 }
00:34:44.760 ]'
00:34:44.760 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:34:44.760 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:34:44.760 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:34:44.760 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:34:44.760 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:34:44.760 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:34:44.760 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:34:44.760 20:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:34:45.328 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==:
00:34:45.328 20:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==:
00:34:45.896 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:34:45.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:34:45.896 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703
00:34:45.896 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:45.896 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:45.896 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:45.896 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:34:45.896 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:34:45.896 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:34:46.155 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2
00:34:46.155 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:34:46.155 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:34:46.155 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:34:46.155 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:34:46.155 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:34:46.155 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:46.155 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:46.155 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:46.155 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:46.155 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:46.155 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:46.155 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:46.721
00:34:46.721 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:34:46.721 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:34:46.721 20:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:34:46.979 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:46.979 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:34:46.979 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:46.979 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:46.979 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:46.979 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:34:46.979 {
00:34:46.979 "auth": {
00:34:46.979 "dhgroup": "ffdhe4096",
00:34:46.979 "digest": "sha384",
00:34:46.979 "state": "completed"
00:34:46.979 },
00:34:46.979 "cntlid": 77,
00:34:46.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703",
00:34:46.979 "listen_address": {
00:34:46.979 "adrfam": "IPv4",
00:34:46.979 "traddr": "10.0.0.3",
00:34:46.979 "trsvcid": "4420",
00:34:46.979 "trtype": "TCP"
00:34:46.979 },
00:34:46.979 "peer_address": {
00:34:46.979 "adrfam": "IPv4",
00:34:46.979 "traddr": "10.0.0.1",
00:34:46.979 "trsvcid": "49424",
00:34:46.979 "trtype": "TCP"
00:34:46.979 },
00:34:46.979 "qid": 0,
00:34:46.979 "state": "enabled",
00:34:46.979 "thread": "nvmf_tgt_poll_group_000"
00:34:46.979 }
00:34:46.979 ]'
00:34:46.979 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:34:46.979 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:34:46.979 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:34:46.979 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:34:46.979 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:34:46.979 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:34:46.979 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:34:46.979 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:34:47.251 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES:
00:34:47.251 20:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES:
00:34:48.184 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:34:48.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:34:48.184 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703
00:34:48.184 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:48.184 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:48.184 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:48.184 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:34:48.184 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:34:48.184 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:34:48.442 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3
00:34:48.442 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:34:48.442 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:34:48.442 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:34:48.442 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:34:48.442 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:34:48.442 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key3
00:34:48.442 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:48.442 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:48.442 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:48.442 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:34:48.442 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:34:48.442 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:34:48.700
00:34:48.959 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:34:48.959 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:34:48.959 20:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:34:49.217 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:49.217 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:34:49.217 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:49.217 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:49.217 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:49.217 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:34:49.217 {
00:34:49.217 "auth": {
00:34:49.217 "dhgroup": "ffdhe4096",
00:34:49.217 "digest": "sha384",
00:34:49.217 "state": "completed"
00:34:49.217 },
00:34:49.217 "cntlid": 79,
00:34:49.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703",
00:34:49.217 "listen_address": {
00:34:49.217 "adrfam": "IPv4",
00:34:49.217 "traddr": "10.0.0.3",
00:34:49.217 "trsvcid": "4420",
00:34:49.217 "trtype": "TCP"
00:34:49.217 },
00:34:49.217 "peer_address": {
00:34:49.217 "adrfam": "IPv4",
00:34:49.217 "traddr": "10.0.0.1",
00:34:49.217 "trsvcid": "49454",
00:34:49.217 "trtype": "TCP"
00:34:49.217 },
00:34:49.217 "qid": 0,
00:34:49.217 "state": "enabled",
00:34:49.217 "thread": "nvmf_tgt_poll_group_000"
00:34:49.217 }
00:34:49.217 ]'
00:34:49.217 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:34:49.217 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:34:49.217 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:34:49.217 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:34:49.218 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:34:49.218 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:34:49.218 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:34:49.218 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:34:49.476 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=:
00:34:49.476 20:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=:
00:34:50.419 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:34:50.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:34:50.419 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703
00:34:50.419 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:50.419 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:50.419 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
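Before the ffdhe6144 pass below, a note on the DHHC-1 strings passed to --dhchap-secret and --dhchap-ctrl-secret in every nvme connect line above. As I read the DH-HMAC-CHAP secret representation used by nvme-cli and SPDK (an assumption worth verifying against NVMe TP 8006), the form is "DHHC-1:<t>:<base64>:", where <t> selects how the secret is transformed (00 = used as-is, 01/02/03 = retained key hashed with SHA-256/384/512) and the base64 payload carries the key bytes followed by a 4-byte CRC-32 of the key. A quick, sketch-level way to eyeball one of the secrets from this log:

# Sketch only -- secret copied from the log above; format per the
# assumption stated in the paragraph, not verified against the spec here.
secret='DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==:'
b64=${secret#DHHC-1:??:}   # strip the "DHHC-1:<t>:" prefix
b64=${b64%:}               # strip the trailing ":"
printf '%s' "$b64" | base64 -d | xxd   # last 4 bytes = trailing CRC-32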
00:34:50.419 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:34:50.419 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:34:50.419 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:34:50.419 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:34:50.419 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:34:50.419 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:34:50.419 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:34:50.419 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:34:50.419 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:34:50.419 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:34:50.419 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:50.419 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:50.419 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:50.419 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:50.419 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:50.419 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:50.419 20:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:50.986
00:34:50.986 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:34:50.986 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:34:50.986 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:34:51.246 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:51.246 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:34:51.246 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:51.246 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:51.246 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:51.246 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:34:51.246 {
00:34:51.246 "auth": {
00:34:51.246 "dhgroup": "ffdhe6144",
00:34:51.246 "digest": "sha384",
00:34:51.246 "state": "completed"
00:34:51.246 },
00:34:51.246 "cntlid": 81,
00:34:51.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703",
00:34:51.246 "listen_address": {
00:34:51.246 "adrfam": "IPv4",
00:34:51.246 "traddr": "10.0.0.3",
00:34:51.246 "trsvcid": "4420",
00:34:51.246 "trtype": "TCP"
00:34:51.246 },
00:34:51.246 "peer_address": {
00:34:51.246 "adrfam": "IPv4",
00:34:51.246 "traddr": "10.0.0.1",
00:34:51.246 "trsvcid": "49478",
00:34:51.246 "trtype": "TCP"
00:34:51.246 },
00:34:51.246 "qid": 0,
00:34:51.246 "state": "enabled",
00:34:51.246 "thread": "nvmf_tgt_poll_group_000"
00:34:51.246 }
00:34:51.246 ]'
00:34:51.246 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:34:51.506 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:34:51.506 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:34:51.506 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:34:51.506 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:34:51.506 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:34:51.506 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:34:51.506 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:34:51.765 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=:
00:34:51.765 20:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=:
00:34:52.379 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:34:52.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:34:52.379 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703
00:34:52.379 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:52.379 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:52.379 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:52.379 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:34:52.379 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:34:52.379 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:34:52.638 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:34:52.638 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:34:52.638 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:34:52.638 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:34:52.638 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:34:52.638 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:34:52.638 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:52.638 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:52.638 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:52.638 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:52.638 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:52.638 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:52.638 20:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:53.206
00:34:53.206 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:34:53.206 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:34:53.206 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:34:53.465 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:53.465 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:34:53.465 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:53.465 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:34:53.465 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:53.465 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:34:53.465 {
00:34:53.465 "auth": {
00:34:53.465 "dhgroup": "ffdhe6144",
00:34:53.465 "digest": "sha384",
00:34:53.465 "state": "completed"
00:34:53.465 },
00:34:53.465 "cntlid": 83,
00:34:53.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703",
00:34:53.465 "listen_address": {
00:34:53.465 "adrfam": "IPv4",
00:34:53.465 "traddr": "10.0.0.3",
00:34:53.465 "trsvcid": "4420",
00:34:53.465 "trtype": "TCP"
00:34:53.465 },
00:34:53.465 "peer_address": {
00:34:53.465 "adrfam": "IPv4",
00:34:53.465 "traddr": "10.0.0.1",
00:34:53.465 "trsvcid": "53800",
00:34:53.465 "trtype": "TCP"
00:34:53.465 },
00:34:53.465 "qid": 0,
00:34:53.465 "state":
"enabled", 00:34:53.465 "thread": "nvmf_tgt_poll_group_000" 00:34:53.465 } 00:34:53.465 ]' 00:34:53.465 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:53.465 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:34:53.465 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:53.465 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:34:53.465 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:53.724 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:53.724 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:53.724 20:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:53.982 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:34:53.982 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:34:54.549 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:54.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:54.549 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:34:54.549 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.549 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:54.549 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.549 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:54.549 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:54.549 20:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:54.808 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:34:54.808 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:34:54.808 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha384 00:34:54.808 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:34:54.808 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:34:54.808 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:54.808 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:54.808 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.808 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:54.808 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.808 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:54.808 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:54.808 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:55.374 00:34:55.374 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:55.374 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:55.374 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:55.633 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.633 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:55.633 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.633 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:55.633 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.633 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:55.633 { 00:34:55.633 "auth": { 00:34:55.633 "dhgroup": "ffdhe6144", 00:34:55.633 "digest": "sha384", 00:34:55.633 "state": "completed" 00:34:55.633 }, 00:34:55.633 "cntlid": 85, 00:34:55.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:34:55.633 "listen_address": { 00:34:55.633 "adrfam": "IPv4", 00:34:55.633 "traddr": "10.0.0.3", 00:34:55.633 "trsvcid": "4420", 00:34:55.633 "trtype": "TCP" 00:34:55.633 }, 00:34:55.633 "peer_address": { 00:34:55.633 "adrfam": "IPv4", 00:34:55.633 "traddr": "10.0.0.1", 00:34:55.633 
"trsvcid": "53840", 00:34:55.633 "trtype": "TCP" 00:34:55.633 }, 00:34:55.633 "qid": 0, 00:34:55.633 "state": "enabled", 00:34:55.633 "thread": "nvmf_tgt_poll_group_000" 00:34:55.633 } 00:34:55.633 ]' 00:34:55.633 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:55.897 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:34:55.898 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:55.898 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:34:55.898 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:55.898 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:55.898 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:55.898 20:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:56.169 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:34:56.169 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:34:56.738 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:56.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:56.738 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:34:56.738 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.738 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:56.738 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.738 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:56.738 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:56.738 20:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:57.306 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:34:57.306 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:34:57.306 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:34:57.306 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:34:57.306 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:34:57.306 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:57.306 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key3 00:34:57.306 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.306 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:57.306 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.306 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:34:57.307 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:34:57.307 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:34:57.565 00:34:57.565 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:57.565 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:57.565 20:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:57.825 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.825 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:57.825 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.825 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:58.084 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.084 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:58.084 { 00:34:58.084 "auth": { 00:34:58.084 "dhgroup": "ffdhe6144", 00:34:58.084 "digest": "sha384", 00:34:58.084 "state": "completed" 00:34:58.084 }, 00:34:58.084 "cntlid": 87, 00:34:58.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:34:58.084 "listen_address": { 00:34:58.084 "adrfam": "IPv4", 00:34:58.084 "traddr": "10.0.0.3", 00:34:58.084 "trsvcid": "4420", 00:34:58.084 "trtype": "TCP" 00:34:58.084 }, 00:34:58.084 "peer_address": { 00:34:58.084 "adrfam": "IPv4", 00:34:58.084 "traddr": "10.0.0.1", 
00:34:58.084 "trsvcid": "53858", 00:34:58.084 "trtype": "TCP" 00:34:58.084 }, 00:34:58.084 "qid": 0, 00:34:58.084 "state": "enabled", 00:34:58.084 "thread": "nvmf_tgt_poll_group_000" 00:34:58.084 } 00:34:58.084 ]' 00:34:58.084 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:58.084 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:34:58.084 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:58.084 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:34:58.084 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:58.084 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:58.084 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:58.084 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:58.343 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:34:58.343 20:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:34:59.280 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:59.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:59.280 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:34:59.280 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.280 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:59.280 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.280 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:34:59.280 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:59.280 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:59.280 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:59.539 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:34:59.539 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- 
# local digest dhgroup key ckey qpairs 00:34:59.539 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:34:59.539 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:34:59.539 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:34:59.539 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:59.539 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:59.539 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.539 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:59.539 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.539 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:59.539 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:59.539 20:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:00.477 00:35:00.477 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:00.477 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:00.477 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:00.478 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.478 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:00.478 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.478 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:00.478 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.478 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:00.478 { 00:35:00.478 "auth": { 00:35:00.478 "dhgroup": "ffdhe8192", 00:35:00.478 "digest": "sha384", 00:35:00.478 "state": "completed" 00:35:00.478 }, 00:35:00.478 "cntlid": 89, 00:35:00.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:35:00.478 "listen_address": { 00:35:00.478 "adrfam": "IPv4", 00:35:00.478 "traddr": "10.0.0.3", 00:35:00.478 "trsvcid": "4420", 00:35:00.478 "trtype": "TCP" 
00:35:00.478 }, 00:35:00.478 "peer_address": { 00:35:00.478 "adrfam": "IPv4", 00:35:00.478 "traddr": "10.0.0.1", 00:35:00.478 "trsvcid": "53868", 00:35:00.478 "trtype": "TCP" 00:35:00.478 }, 00:35:00.478 "qid": 0, 00:35:00.478 "state": "enabled", 00:35:00.478 "thread": "nvmf_tgt_poll_group_000" 00:35:00.478 } 00:35:00.478 ]' 00:35:00.478 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:00.736 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:00.736 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:00.736 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:35:00.736 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:00.736 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:00.736 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:00.736 20:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:00.995 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:35:00.995 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:35:01.932 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:01.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:01.932 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:35:01.932 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.932 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:01.932 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.932 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:01.932 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:01.932 20:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:02.191 20:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:35:02.192 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:02.192 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:35:02.192 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:35:02.192 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:35:02.192 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:02.192 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:02.192 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.192 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:02.192 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.192 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:02.192 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:02.192 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:02.759 00:35:02.759 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:02.759 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:02.759 20:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:03.018 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.018 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:03.018 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.018 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:03.018 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.018 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:03.018 { 00:35:03.018 "auth": { 00:35:03.018 "dhgroup": "ffdhe8192", 00:35:03.018 "digest": "sha384", 00:35:03.018 "state": "completed" 00:35:03.018 }, 00:35:03.018 "cntlid": 91, 00:35:03.018 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:35:03.018 "listen_address": { 00:35:03.018 "adrfam": "IPv4", 00:35:03.018 "traddr": "10.0.0.3", 00:35:03.018 "trsvcid": "4420", 00:35:03.018 "trtype": "TCP" 00:35:03.018 }, 00:35:03.018 "peer_address": { 00:35:03.018 "adrfam": "IPv4", 00:35:03.018 "traddr": "10.0.0.1", 00:35:03.018 "trsvcid": "53910", 00:35:03.018 "trtype": "TCP" 00:35:03.018 }, 00:35:03.018 "qid": 0, 00:35:03.018 "state": "enabled", 00:35:03.018 "thread": "nvmf_tgt_poll_group_000" 00:35:03.018 } 00:35:03.018 ]' 00:35:03.018 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:03.018 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:03.277 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:03.277 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:35:03.277 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:03.277 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:03.277 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:03.277 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:03.536 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:35:03.536 20:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:35:04.103 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:04.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:04.103 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:35:04.103 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.103 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:04.103 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.103 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:04.103 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:04.103 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:04.362 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:35:04.362 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:04.362 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:35:04.362 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:35:04.362 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:35:04.362 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:04.362 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:04.362 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.362 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:04.362 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.362 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:04.362 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:04.362 20:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:04.952 00:35:04.952 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:04.952 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:04.952 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:05.211 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.211 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:05.211 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.211 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:05.211 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.211 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:05.211 { 00:35:05.211 "auth": { 00:35:05.211 "dhgroup": "ffdhe8192", 
00:35:05.211 "digest": "sha384", 00:35:05.211 "state": "completed" 00:35:05.211 }, 00:35:05.211 "cntlid": 93, 00:35:05.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:35:05.211 "listen_address": { 00:35:05.211 "adrfam": "IPv4", 00:35:05.211 "traddr": "10.0.0.3", 00:35:05.211 "trsvcid": "4420", 00:35:05.211 "trtype": "TCP" 00:35:05.211 }, 00:35:05.211 "peer_address": { 00:35:05.211 "adrfam": "IPv4", 00:35:05.211 "traddr": "10.0.0.1", 00:35:05.211 "trsvcid": "40840", 00:35:05.211 "trtype": "TCP" 00:35:05.211 }, 00:35:05.211 "qid": 0, 00:35:05.211 "state": "enabled", 00:35:05.211 "thread": "nvmf_tgt_poll_group_000" 00:35:05.211 } 00:35:05.211 ]' 00:35:05.211 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:05.211 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:05.211 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:05.470 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:35:05.470 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:05.470 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:05.470 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:05.470 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:05.728 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:35:05.728 20:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:35:06.295 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:06.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:06.295 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:35:06.295 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.295 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:06.295 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.296 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:06.296 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:35:06.296 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:06.556 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:35:06.556 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:06.556 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:35:06.556 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:35:06.556 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:35:06.556 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:06.556 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key3 00:35:06.556 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.556 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:06.556 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.556 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:35:06.556 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:06.556 20:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:07.124 00:35:07.384 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:07.384 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:07.384 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:07.643 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.643 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:07.643 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.643 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:07.643 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.643 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:07.643 { 00:35:07.643 "auth": { 00:35:07.643 "dhgroup": 
"ffdhe8192", 00:35:07.643 "digest": "sha384", 00:35:07.643 "state": "completed" 00:35:07.643 }, 00:35:07.643 "cntlid": 95, 00:35:07.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:35:07.643 "listen_address": { 00:35:07.643 "adrfam": "IPv4", 00:35:07.643 "traddr": "10.0.0.3", 00:35:07.643 "trsvcid": "4420", 00:35:07.643 "trtype": "TCP" 00:35:07.643 }, 00:35:07.643 "peer_address": { 00:35:07.643 "adrfam": "IPv4", 00:35:07.643 "traddr": "10.0.0.1", 00:35:07.643 "trsvcid": "40860", 00:35:07.643 "trtype": "TCP" 00:35:07.643 }, 00:35:07.643 "qid": 0, 00:35:07.644 "state": "enabled", 00:35:07.644 "thread": "nvmf_tgt_poll_group_000" 00:35:07.644 } 00:35:07.644 ]' 00:35:07.644 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:07.644 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:07.644 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:07.644 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:35:07.644 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:07.644 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:07.644 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:07.644 20:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:08.212 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:35:08.212 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:35:08.780 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:08.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:08.780 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:35:08.780 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.780 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:08.780 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.780 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:35:08.780 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:35:08.780 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:08.780 
20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:35:08.780 20:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:35:09.038 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:35:09.038 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:09.038 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:35:09.038 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:35:09.038 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:35:09.038 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:09.038 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:09.038 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.038 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:09.038 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.038 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:09.038 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:09.038 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:09.297 00:35:09.297 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:09.297 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:09.297 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:09.891 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.891 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:09.891 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.891 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:09.891 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.891 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:09.891 { 00:35:09.891 "auth": { 00:35:09.891 "dhgroup": "null", 00:35:09.891 "digest": "sha512", 00:35:09.891 "state": "completed" 00:35:09.891 }, 00:35:09.891 "cntlid": 97, 00:35:09.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:35:09.891 "listen_address": { 00:35:09.891 "adrfam": "IPv4", 00:35:09.891 "traddr": "10.0.0.3", 00:35:09.891 "trsvcid": "4420", 00:35:09.891 "trtype": "TCP" 00:35:09.891 }, 00:35:09.891 "peer_address": { 00:35:09.891 "adrfam": "IPv4", 00:35:09.891 "traddr": "10.0.0.1", 00:35:09.891 "trsvcid": "40900", 00:35:09.891 "trtype": "TCP" 00:35:09.891 }, 00:35:09.891 "qid": 0, 00:35:09.891 "state": "enabled", 00:35:09.891 "thread": "nvmf_tgt_poll_group_000" 00:35:09.891 } 00:35:09.891 ]' 00:35:09.891 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:09.891 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:35:09.891 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:09.891 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:35:09.891 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:09.891 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:09.891 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:09.891 20:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:10.150 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:35:10.150 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:35:10.714 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:10.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:10.715 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:35:10.715 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.715 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:10.715 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
00:35:10.715 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:35:10.715 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:35:10.715 20:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:35:10.971 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:35:10.971 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:35:10.971 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:35:10.971 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:35:10.971 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:35:10.971 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:35:10.971 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:10.971 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:10.971 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:35:10.971 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:10.971 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:10.971 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:10.971 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:11.229
00:35:11.229 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:35:11.229 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:35:11.229 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:35:11.487 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:11.487 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
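The qpairs dump that follows is where the actual verification happens: each qpair entry carries an "auth" object with the negotiated digest, DH group, and handshake state. The checks at target/auth.sh@75-77 amount to jq one-liners over that JSON, along the lines of this sketch (rpc.py as above; $qpairs holds the array printed below):

  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha512 ]]       # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]        # negotiated DH group
  [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]     # handshake finished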
00:35:11.487 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:35:11.487 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:11.487 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:35:11.487 {
00:35:11.487 "auth": {
00:35:11.487 "dhgroup": "null",
00:35:11.487 "digest": "sha512",
00:35:11.487 "state": "completed"
00:35:11.487 },
00:35:11.487 "cntlid": 99,
00:35:11.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703",
00:35:11.487 "listen_address": {
00:35:11.487 "adrfam": "IPv4",
00:35:11.487 "traddr": "10.0.0.3",
00:35:11.487 "trsvcid": "4420",
00:35:11.487 "trtype": "TCP"
00:35:11.487 },
00:35:11.487 "peer_address": {
00:35:11.487 "adrfam": "IPv4",
00:35:11.487 "traddr": "10.0.0.1",
00:35:11.487 "trsvcid": "40932",
00:35:11.487 "trtype": "TCP"
00:35:11.487 },
00:35:11.487 "qid": 0,
00:35:11.487 "state": "enabled",
00:35:11.487 "thread": "nvmf_tgt_poll_group_000"
00:35:11.487 }
00:35:11.487 ]'
00:35:11.746 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:35:11.746 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:35:11.746 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:35:11.746 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:35:11.746 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:35:11.746 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:35:11.746 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:35:11.746 20:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:35:12.004 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==:
00:35:12.004 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==:
00:35:12.570 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:35:12.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:35:12.570 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703
00:35:12.570 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:12.570 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:12.570 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:35:12.570 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:35:12.570 20:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:35:13.136 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2
00:35:13.136 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:35:13.136 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:35:13.136 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:35:13.136 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:35:13.136 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:35:13.136 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:13.136 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:13.136 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:35:13.136 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:13.136 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:13.136 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:13.136 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:13.394
00:35:13.394 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:35:13.394 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:35:13.394 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:35:13.652 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:13.652 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:35:13.652 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:13.652 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:35:13.652 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:13.652 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:35:13.652 {
00:35:13.652 "auth": {
00:35:13.652 "dhgroup": "null",
00:35:13.652 "digest": "sha512",
00:35:13.652 "state": "completed"
00:35:13.652 },
00:35:13.652 "cntlid": 101,
00:35:13.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703",
00:35:13.652 "listen_address": {
00:35:13.652 "adrfam": "IPv4",
00:35:13.652 "traddr": "10.0.0.3",
00:35:13.652 "trsvcid": "4420",
00:35:13.652 "trtype": "TCP"
00:35:13.652 },
00:35:13.652 "peer_address": {
00:35:13.652 "adrfam": "IPv4",
00:35:13.652 "traddr": "10.0.0.1",
00:35:13.652 "trsvcid": "41818",
00:35:13.652 "trtype": "TCP"
00:35:13.652 },
00:35:13.652 "qid": 0,
00:35:13.652 "state": "enabled",
00:35:13.652 "thread": "nvmf_tgt_poll_group_000"
00:35:13.652 }
00:35:13.652 ]'
00:35:13.652 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:35:13.652 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:35:13.652 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:35:13.652 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:35:13.652 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:35:13.652 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:35:13.652 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:35:13.652 20:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:35:14.216 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES:
00:35:14.216 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES:
00:35:14.781 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:35:14.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:35:14.781 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703
00:35:14.781 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:14.781 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:35:14.781 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:14.781 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:35:14.781 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:35:14.781 20:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:35:15.039 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3
00:35:15.039 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:35:15.039 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:35:15.039 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:35:15.039 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:35:15.039 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:35:15.039 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key3
00:35:15.039 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:15.039 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:35:15.039 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:15.039 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:35:15.039 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:35:15.039 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:35:15.297
00:35:15.297 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:35:15.297 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
xtrace_disable 00:35:15.555 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:15.555 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.555 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:15.555 { 00:35:15.555 "auth": { 00:35:15.555 "dhgroup": "null", 00:35:15.555 "digest": "sha512", 00:35:15.555 "state": "completed" 00:35:15.555 }, 00:35:15.555 "cntlid": 103, 00:35:15.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:35:15.555 "listen_address": { 00:35:15.555 "adrfam": "IPv4", 00:35:15.555 "traddr": "10.0.0.3", 00:35:15.555 "trsvcid": "4420", 00:35:15.555 "trtype": "TCP" 00:35:15.555 }, 00:35:15.555 "peer_address": { 00:35:15.555 "adrfam": "IPv4", 00:35:15.555 "traddr": "10.0.0.1", 00:35:15.555 "trsvcid": "41840", 00:35:15.555 "trtype": "TCP" 00:35:15.555 }, 00:35:15.555 "qid": 0, 00:35:15.555 "state": "enabled", 00:35:15.555 "thread": "nvmf_tgt_poll_group_000" 00:35:15.555 } 00:35:15.555 ]' 00:35:15.555 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:15.812 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:35:15.812 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:15.812 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:35:15.812 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:15.812 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:15.812 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:15.812 20:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:16.071 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:35:16.071 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:35:16.637 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:16.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:16.637 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:35:16.637 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.637 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:16.637 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
00:35:16.637 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:35:16.637 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:35:16.637 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:16.637 20:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:16.896 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0
00:35:16.896 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:35:16.896 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:35:16.896 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:35:16.896 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:35:16.896 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:35:16.896 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:16.896 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:16.896 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:35:16.896 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:16.896 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:16.896 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:16.896 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:17.154
00:35:17.154 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:35:17.154 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:35:17.154 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:35:17.412 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:17.412 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
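Here the outer loop has moved on from the null DH group to ffdhe2048: the same four keys are replayed, but the challenge/response now carries a real finite-field Diffie-Hellman exchange inside DH-HMAC-CHAP, so the handshake additionally derives a fresh shared secret rather than relying on the static key alone. Within this test, only the host-side options call differs between sweeps, e.g. (sketch mirroring the auth.sh@121 call above):

  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048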
00:35:17.412 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:17.412 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:35:17.670 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:17.670 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:35:17.670 {
00:35:17.670 "auth": {
00:35:17.670 "dhgroup": "ffdhe2048",
00:35:17.670 "digest": "sha512",
00:35:17.670 "state": "completed"
00:35:17.670 },
00:35:17.670 "cntlid": 105,
00:35:17.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703",
00:35:17.670 "listen_address": {
00:35:17.670 "adrfam": "IPv4",
00:35:17.670 "traddr": "10.0.0.3",
00:35:17.670 "trsvcid": "4420",
00:35:17.670 "trtype": "TCP"
00:35:17.670 },
00:35:17.670 "peer_address": {
00:35:17.670 "adrfam": "IPv4",
00:35:17.670 "traddr": "10.0.0.1",
00:35:17.670 "trsvcid": "41874",
00:35:17.670 "trtype": "TCP"
00:35:17.670 },
00:35:17.670 "qid": 0,
00:35:17.670 "state": "enabled",
00:35:17.670 "thread": "nvmf_tgt_poll_group_000"
00:35:17.670 }
00:35:17.670 ]'
00:35:17.670 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:35:17.670 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:35:17.670 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:35:17.670 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:35:17.670 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:35:17.670 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:35:17.670 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:35:17.670 20:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:35:17.928 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=:
00:35:17.928 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=:
00:35:18.860 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:35:18.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:35:18.860 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703
00:35:18.860 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:18.860 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:35:18.860 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:18.860 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:35:18.860 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:18.860 20:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:18.860 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1
00:35:18.860 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:35:18.860 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:35:18.860 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:35:18.860 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:35:18.860 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:35:18.860 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:18.860 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:18.860 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:35:18.860 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:18.860 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:18.860 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:18.860 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:19.426
00:35:19.426 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:35:19.426 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:35:19.426 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:35:19.684 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:19.684 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:35:19.684 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:19.684 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:35:19.684 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:19.684 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:35:19.684 {
00:35:19.684 "auth": {
00:35:19.684 "dhgroup": "ffdhe2048",
00:35:19.684 "digest": "sha512",
00:35:19.684 "state": "completed"
00:35:19.684 },
00:35:19.684 "cntlid": 107,
00:35:19.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703",
00:35:19.684 "listen_address": {
00:35:19.684 "adrfam": "IPv4",
00:35:19.684 "traddr": "10.0.0.3",
00:35:19.684 "trsvcid": "4420",
00:35:19.684 "trtype": "TCP"
00:35:19.684 },
00:35:19.684 "peer_address": {
00:35:19.684 "adrfam": "IPv4",
00:35:19.684 "traddr": "10.0.0.1",
00:35:19.684 "trsvcid": "41906",
00:35:19.684 "trtype": "TCP"
00:35:19.684 },
00:35:19.684 "qid": 0,
00:35:19.684 "state": "enabled",
00:35:19.684 "thread": "nvmf_tgt_poll_group_000"
00:35:19.684 }
00:35:19.684 ]'
00:35:19.684 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:35:19.684 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:35:19.684 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:35:19.684 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:35:19.684 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:35:19.684 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:35:19.684 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:35:19.684 20:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:35:19.941 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==:
00:35:19.941 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==:
00:35:20.874 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:35:20.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
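A detail worth keeping in mind when reading these traces: hostrpc is a wrapper that pins rpc.py to -s /var/tmp/host.sock, while bare rpc_cmd talks to the target application's default RPC socket (presumably /var/tmp/spdk.sock, SPDK's usual default). The test is therefore driving two SPDK processes on the same VM, one serving as the NVMe-oF target on 10.0.0.3 and one playing the host/initiator role through its bdev layer, with the kernel initiator exercised separately via nvme-cli. Roughly:

  rpc.py nvmf_subsystem_get_qpairs "$subnqn"                 # target-role process, default socket
  rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers    # host-role process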
00:35:20.874 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703
00:35:20.874 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:20.874 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:35:20.874 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:20.874 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:35:20.874 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:20.874 20:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:20.874 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2
00:35:20.874 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:35:20.874 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:35:20.874 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:35:20.874 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:35:20.874 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:35:20.874 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:20.874 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:20.874 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:35:20.874 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:20.874 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:20.874 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:20.874 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:21.441
00:35:21.441 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:35:21.441 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:35:21.441 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:35:21.700 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:21.700 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:35:21.700 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:21.700 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:35:21.700 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:21.700 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:35:21.700 {
00:35:21.700 "auth": {
00:35:21.700 "dhgroup": "ffdhe2048",
00:35:21.700 "digest": "sha512",
00:35:21.700 "state": "completed"
00:35:21.700 },
00:35:21.700 "cntlid": 109,
00:35:21.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703",
00:35:21.701 "listen_address": {
00:35:21.701 "adrfam": "IPv4",
00:35:21.701 "traddr": "10.0.0.3",
00:35:21.701 "trsvcid": "4420",
00:35:21.701 "trtype": "TCP"
00:35:21.701 },
00:35:21.701 "peer_address": {
00:35:21.701 "adrfam": "IPv4",
00:35:21.701 "traddr": "10.0.0.1",
00:35:21.701 "trsvcid": "41936",
00:35:21.701 "trtype": "TCP"
00:35:21.701 },
00:35:21.701 "qid": 0,
00:35:21.701 "state": "enabled",
00:35:21.701 "thread": "nvmf_tgt_poll_group_000"
00:35:21.701 }
00:35:21.701 ]'
00:35:21.701 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:35:21.701 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:35:21.701 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:35:21.701 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:35:21.701 20:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:35:21.701 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:35:21.701 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:35:21.701 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:35:21.959 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES:
00:35:21.959 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES:
00:35:22.894 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:35:22.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
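The --dhchap-secret strings traded above follow the NVMe TP 8006 secret representation, DHHC-1:xx:<base64>:, where (as this editor understands the format) xx records how the secret is transformed into the retained key (00 = used as-is, 01/02/03 = hashed with SHA-256/384/512) and the base64 payload carries the key material plus an appended CRC-32, which accounts for the different payload lengths among key0-key3 in this run. Secrets of this shape can be minted with nvme-cli, e.g. (a sketch; option spellings are an assumption based on recent nvme-cli):

  nvme gen-dhchap-key --hmac=3 --key-length=48 \
      --nqn nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703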
00:35:22.894 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703
00:35:22.894 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:22.894 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:35:22.894 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:22.894 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:35:22.894 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:22.894 20:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:23.152 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3
00:35:23.152 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:35:23.152 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:35:23.152 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:35:23.152 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:35:23.152 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:35:23.152 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key3
00:35:23.152 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:23.152 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:35:23.152 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:23.152 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:35:23.152 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:35:23.153 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:35:23.412
00:35:23.412 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:35:23.412 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:35:23.412 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:35:23.684 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:23.684 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:35:23.684 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:23.684 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:35:23.684 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:23.684 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:35:23.684 {
00:35:23.684 "auth": {
00:35:23.684 "dhgroup": "ffdhe2048",
00:35:23.684 "digest": "sha512",
00:35:23.684 "state": "completed"
00:35:23.684 },
00:35:23.684 "cntlid": 111,
00:35:23.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703",
00:35:23.684 "listen_address": {
00:35:23.684 "adrfam": "IPv4",
00:35:23.684 "traddr": "10.0.0.3",
00:35:23.684 "trsvcid": "4420",
00:35:23.684 "trtype": "TCP"
00:35:23.684 },
00:35:23.684 "peer_address": {
00:35:23.684 "adrfam": "IPv4",
00:35:23.684 "traddr": "10.0.0.1",
00:35:23.684 "trsvcid": "35348",
00:35:23.684 "trtype": "TCP"
00:35:23.684 },
00:35:23.684 "qid": 0,
00:35:23.684 "state": "enabled",
00:35:23.684 "thread": "nvmf_tgt_poll_group_000"
00:35:23.684 }
00:35:23.684 ]'
00:35:23.684 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:35:23.684 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:35:23.684 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:35:23.684 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:35:23.684 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:35:23.684 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:35:23.684 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:35:23.684 20:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:35:24.253 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=:
00:35:24.253 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=:
00:35:24.819 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:35:24.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
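With key3 done, the sha512/ffdhe2048 sweep is complete, and the trace below begins the same four-key pass over ffdhe3072. The driving structure visible in the auth.sh@119/@120/@121/@123 frames throughout this log is a loop nest presumably equivalent to this sketch (with $digest fixed at sha512 in this part of the run):

  for dhgroup in "${dhgroups[@]}"; do                                  # auth.sh@119
      for keyid in "${!keys[@]}"; do                                   # auth.sh@120
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
              --dhchap-dhgroups "$dhgroup"                             # auth.sh@121
          connect_authenticate "$digest" "$dhgroup" "$keyid"           # auth.sh@123
      done
  done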
00:35:24.819 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703
00:35:24.819 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:24.819 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:35:24.819 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:24.819 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:35:24.819 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:35:24.819 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:35:24.819 20:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:35:25.078 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0
00:35:25.078 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:35:25.078 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:35:25.078 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:35:25.078 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:35:25.078 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:35:25.078 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:25.078 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:25.078 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:35:25.078 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:25.078 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:25.078 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:25.078 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:25.336
00:35:25.336 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:35:25.336 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:35:25.336 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:35:25.905 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:25.905 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:35:25.905 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:25.905 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:35:25.905 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:25.905 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:35:25.905 {
00:35:25.905 "auth": {
00:35:25.905 "dhgroup": "ffdhe3072",
00:35:25.905 "digest": "sha512",
00:35:25.905 "state": "completed"
00:35:25.905 },
00:35:25.905 "cntlid": 113,
00:35:25.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703",
00:35:25.905 "listen_address": {
00:35:25.905 "adrfam": "IPv4",
00:35:25.905 "traddr": "10.0.0.3",
00:35:25.905 "trsvcid": "4420",
00:35:25.905 "trtype": "TCP"
00:35:25.905 },
00:35:25.905 "peer_address": {
00:35:25.905 "adrfam": "IPv4",
00:35:25.905 "traddr": "10.0.0.1",
00:35:25.905 "trsvcid": "35362",
00:35:25.905 "trtype": "TCP"
00:35:25.905 },
00:35:25.905 "qid": 0,
00:35:25.906 "state": "enabled",
00:35:25.906 "thread": "nvmf_tgt_poll_group_000"
00:35:25.906 }
00:35:25.906 ]'
00:35:25.906 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:35:25.906 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:35:25.906 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:35:25.906 20:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:35:25.906 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:35:25.906 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:35:25.906 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:35:25.906 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:35:26.165 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=:
00:35:26.165 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=:
00:35:26.733 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:35:26.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:35:26.733 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703
00:35:26.733 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:26.733 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:35:26.992 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:26.992 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:35:26.992 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:35:26.992 20:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:35:26.992 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1
00:35:26.992 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:35:26.992 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:35:26.992 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:35:26.992 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:35:26.992 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:35:26.992 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:26.992 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:26.992 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:35:26.992 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:26.992 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:27.559 00:35:27.559 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:27.559 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:27.559 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:27.819 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.819 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:27.819 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.819 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:27.819 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.819 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:27.819 { 00:35:27.819 "auth": { 00:35:27.819 "dhgroup": "ffdhe3072", 00:35:27.819 "digest": "sha512", 00:35:27.819 "state": "completed" 00:35:27.819 }, 00:35:27.819 "cntlid": 115, 00:35:27.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:35:27.819 "listen_address": { 00:35:27.819 "adrfam": "IPv4", 00:35:27.819 "traddr": "10.0.0.3", 00:35:27.819 "trsvcid": "4420", 00:35:27.819 "trtype": "TCP" 00:35:27.819 }, 00:35:27.819 "peer_address": { 00:35:27.819 "adrfam": "IPv4", 00:35:27.819 "traddr": "10.0.0.1", 00:35:27.819 "trsvcid": "35388", 00:35:27.819 "trtype": "TCP" 00:35:27.819 }, 00:35:27.819 "qid": 0, 00:35:27.819 "state": "enabled", 00:35:27.819 "thread": "nvmf_tgt_poll_group_000" 00:35:27.819 } 00:35:27.819 ]' 00:35:27.819 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:27.819 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:35:27.819 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:27.819 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:35:27.819 20:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:27.819 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:27.819 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:27.819 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:28.386 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:35:28.386 20:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid 
e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:35:28.953 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:28.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:28.953 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:35:28.953 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.953 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:28.953 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.953 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:28.953 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:28.953 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:29.212 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:35:29.212 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:29.212 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:35:29.212 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:35:29.212 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:35:29.212 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:29.212 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:29.212 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.212 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:29.212 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.212 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:29.212 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:29.212 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:29.471 00:35:29.471 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:29.471 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:29.471 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:29.729 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.729 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:29.729 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.729 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:29.988 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.988 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:29.988 { 00:35:29.988 "auth": { 00:35:29.988 "dhgroup": "ffdhe3072", 00:35:29.988 "digest": "sha512", 00:35:29.988 "state": "completed" 00:35:29.988 }, 00:35:29.988 "cntlid": 117, 00:35:29.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:35:29.988 "listen_address": { 00:35:29.988 "adrfam": "IPv4", 00:35:29.988 "traddr": "10.0.0.3", 00:35:29.988 "trsvcid": "4420", 00:35:29.988 "trtype": "TCP" 00:35:29.988 }, 00:35:29.988 "peer_address": { 00:35:29.988 "adrfam": "IPv4", 00:35:29.988 "traddr": "10.0.0.1", 00:35:29.988 "trsvcid": "35424", 00:35:29.988 "trtype": "TCP" 00:35:29.988 }, 00:35:29.988 "qid": 0, 00:35:29.988 "state": "enabled", 00:35:29.988 "thread": "nvmf_tgt_poll_group_000" 00:35:29.988 } 00:35:29.988 ]' 00:35:29.988 20:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:29.988 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:35:29.988 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:29.989 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:35:29.989 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:29.989 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:29.989 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:29.989 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:30.249 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:35:30.249 20:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:35:30.817 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:30.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:30.817 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:35:30.817 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.817 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:30.817 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.817 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:30.817 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:30.817 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:31.385 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:35:31.385 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:31.385 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:35:31.385 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:35:31.385 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:35:31.385 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:31.386 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key3 00:35:31.386 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.386 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:31.386 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.386 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:35:31.386 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:31.386 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:31.645 00:35:31.645 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:31.645 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:31.645 20:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:31.904 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.904 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:31.904 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.904 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:31.904 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.904 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:31.904 { 00:35:31.904 "auth": { 00:35:31.904 "dhgroup": "ffdhe3072", 00:35:31.904 "digest": "sha512", 00:35:31.904 "state": "completed" 00:35:31.904 }, 00:35:31.904 "cntlid": 119, 00:35:31.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:35:31.904 "listen_address": { 00:35:31.904 "adrfam": "IPv4", 00:35:31.904 "traddr": "10.0.0.3", 00:35:31.904 "trsvcid": "4420", 00:35:31.904 "trtype": "TCP" 00:35:31.904 }, 00:35:31.904 "peer_address": { 00:35:31.904 "adrfam": "IPv4", 00:35:31.904 "traddr": "10.0.0.1", 00:35:31.904 "trsvcid": "35444", 00:35:31.904 "trtype": "TCP" 00:35:31.904 }, 00:35:31.904 "qid": 0, 00:35:31.904 "state": "enabled", 00:35:31.904 "thread": "nvmf_tgt_poll_group_000" 00:35:31.904 } 00:35:31.904 ]' 00:35:31.904 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:31.904 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:35:31.904 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:32.163 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:35:32.163 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:32.164 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:32.164 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:32.164 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:32.422 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:35:32.422 20:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:35:32.990 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:32.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:32.990 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:35:32.990 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.990 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:32.990 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.990 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:35:32.990 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:32.990 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:32.990 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:33.249 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:35:33.249 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:33.249 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:35:33.249 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:35:33.249 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:35:33.249 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:33.249 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:33.249 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.249 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:33.249 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.249 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:33.249 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:33.249 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:33.816 00:35:33.816 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:33.816 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:33.816 20:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:34.075 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.075 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:34.075 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.075 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:34.075 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.075 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:34.075 { 00:35:34.075 "auth": { 00:35:34.075 "dhgroup": "ffdhe4096", 00:35:34.075 "digest": "sha512", 00:35:34.075 "state": "completed" 00:35:34.075 }, 00:35:34.075 "cntlid": 121, 00:35:34.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:35:34.075 "listen_address": { 00:35:34.075 "adrfam": "IPv4", 00:35:34.075 "traddr": "10.0.0.3", 00:35:34.075 "trsvcid": "4420", 00:35:34.075 "trtype": "TCP" 00:35:34.075 }, 00:35:34.075 "peer_address": { 00:35:34.075 "adrfam": "IPv4", 00:35:34.075 "traddr": "10.0.0.1", 00:35:34.075 "trsvcid": "38410", 00:35:34.075 "trtype": "TCP" 00:35:34.075 }, 00:35:34.075 "qid": 0, 00:35:34.075 "state": "enabled", 00:35:34.075 "thread": "nvmf_tgt_poll_group_000" 00:35:34.075 } 00:35:34.075 ]' 00:35:34.075 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:34.075 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:35:34.075 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:34.075 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:35:34.075 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:34.333 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:34.333 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:34.333 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:34.592 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret 
DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:35:34.592 20:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:35:35.159 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:35.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:35.159 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:35:35.159 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.159 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:35.159 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.159 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:35.159 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:35.159 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:35.418 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:35:35.418 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:35.418 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:35:35.418 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:35:35.418 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:35:35.418 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:35.418 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:35.418 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.418 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:35.418 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.418 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:35.418 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:35.418 20:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:36.011 00:35:36.011 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:36.011 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:36.011 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:36.270 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.270 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:36.270 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.270 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:36.270 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.270 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:36.270 { 00:35:36.270 "auth": { 00:35:36.270 "dhgroup": "ffdhe4096", 00:35:36.270 "digest": "sha512", 00:35:36.270 "state": "completed" 00:35:36.270 }, 00:35:36.270 "cntlid": 123, 00:35:36.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:35:36.270 "listen_address": { 00:35:36.270 "adrfam": "IPv4", 00:35:36.270 "traddr": "10.0.0.3", 00:35:36.270 "trsvcid": "4420", 00:35:36.270 "trtype": "TCP" 00:35:36.270 }, 00:35:36.270 "peer_address": { 00:35:36.270 "adrfam": "IPv4", 00:35:36.270 "traddr": "10.0.0.1", 00:35:36.270 "trsvcid": "38438", 00:35:36.270 "trtype": "TCP" 00:35:36.270 }, 00:35:36.270 "qid": 0, 00:35:36.270 "state": "enabled", 00:35:36.270 "thread": "nvmf_tgt_poll_group_000" 00:35:36.270 } 00:35:36.270 ]' 00:35:36.270 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:36.270 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:35:36.270 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:36.270 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:35:36.270 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:36.270 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:36.270 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:36.270 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:36.529 20:34:31 
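# Aside: every "hostrpc <cmd>" echo above is immediately followed by the same
# expansion at target/auth.sh@31, so the helper is effectively a thin wrapper
# over rpc.py pointed at the host-side SPDK application's RPC socket. A minimal
# sketch of that wrapper (paths taken from the trace; the function body is a
# reconstruction, not a quote of the script):
hostrpc() {
	/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
}
# e.g. hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0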
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:35:36.529 20:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:35:37.466 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:37.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:37.466 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:35:37.466 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.466 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:37.466 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.466 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:37.466 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:37.466 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:37.725 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:35:37.725 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:37.725 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:35:37.725 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:35:37.725 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:35:37.725 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:37.725 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:37.725 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.725 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:37.725 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.725 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:37.725 20:34:32 
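# Aside: the block above is one pass of the nested loop visible at
# target/auth.sh@119-123 - for each DH group, every key index is re-tested via
# connect_authenticate <digest> <dhgroup> <keyid>. An approximate sketch of
# that shape; the digest is fixed to sha512 in this section and the group list
# is inferred from the groups exercised here (the contents of the keys array
# are not shown in this part of the log and are assumed):
digest=sha512
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)
for dhgroup in "${dhgroups[@]}"; do
	for keyid in "${!keys[@]}"; do
		hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
		connect_authenticate "$digest" "$dhgroup" "$keyid"
	done
done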
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:37.725 20:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:37.984 00:35:37.984 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:37.984 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:37.984 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:38.243 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.243 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:38.243 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.243 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:38.243 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.243 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:38.243 { 00:35:38.243 "auth": { 00:35:38.243 "dhgroup": "ffdhe4096", 00:35:38.243 "digest": "sha512", 00:35:38.243 "state": "completed" 00:35:38.243 }, 00:35:38.243 "cntlid": 125, 00:35:38.243 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:35:38.243 "listen_address": { 00:35:38.243 "adrfam": "IPv4", 00:35:38.243 "traddr": "10.0.0.3", 00:35:38.243 "trsvcid": "4420", 00:35:38.243 "trtype": "TCP" 00:35:38.243 }, 00:35:38.243 "peer_address": { 00:35:38.243 "adrfam": "IPv4", 00:35:38.243 "traddr": "10.0.0.1", 00:35:38.243 "trsvcid": "38472", 00:35:38.243 "trtype": "TCP" 00:35:38.243 }, 00:35:38.243 "qid": 0, 00:35:38.243 "state": "enabled", 00:35:38.243 "thread": "nvmf_tgt_poll_group_000" 00:35:38.243 } 00:35:38.243 ]' 00:35:38.243 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:38.243 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:35:38.243 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:38.519 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:35:38.519 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:38.519 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:38.519 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:38.519 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:38.783 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:35:38.783 20:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:35:39.350 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:39.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:39.350 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:35:39.350 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.350 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:39.350 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.350 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:39.350 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:39.350 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:39.609 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:35:39.609 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:39.609 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:35:39.609 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:35:39.609 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:35:39.609 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:39.609 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key3 00:35:39.609 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.609 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:39.609 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.609 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:35:39.609 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:39.609 20:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:40.177 00:35:40.177 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:40.177 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:40.177 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:40.436 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.436 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:40.436 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.436 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:40.436 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.436 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:40.436 { 00:35:40.436 "auth": { 00:35:40.436 "dhgroup": "ffdhe4096", 00:35:40.436 "digest": "sha512", 00:35:40.436 "state": "completed" 00:35:40.436 }, 00:35:40.436 "cntlid": 127, 00:35:40.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:35:40.436 "listen_address": { 00:35:40.436 "adrfam": "IPv4", 00:35:40.436 "traddr": "10.0.0.3", 00:35:40.436 "trsvcid": "4420", 00:35:40.436 "trtype": "TCP" 00:35:40.436 }, 00:35:40.436 "peer_address": { 00:35:40.436 "adrfam": "IPv4", 00:35:40.436 "traddr": "10.0.0.1", 00:35:40.436 "trsvcid": "38502", 00:35:40.436 "trtype": "TCP" 00:35:40.436 }, 00:35:40.436 "qid": 0, 00:35:40.436 "state": "enabled", 00:35:40.436 "thread": "nvmf_tgt_poll_group_000" 00:35:40.436 } 00:35:40.436 ]' 00:35:40.436 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:40.436 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:35:40.436 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:40.436 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:35:40.436 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:40.436 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:40.436 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:40.436 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:40.695 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:35:40.695 20:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:35:41.261 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:41.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:41.519 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:35:41.519 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.519 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:41.519 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.519 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:35:41.519 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:41.519 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:41.519 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:41.778 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:35:41.778 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:41.778 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:35:41.778 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:35:41.778 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:35:41.778 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:41.778 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:41.778 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.778 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:41.778 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.778 20:34:36 
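# Aside: the expansion at target/auth.sh@68,
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), makes the controller key
# optional - when ckeys[keyid] is empty the flag is dropped entirely, which is
# why key3 is registered with --dhchap-key only (host-authenticated,
# unidirectional) while key0-key2 also pass --dhchap-ctrlr-key. Sketch of the
# two resulting target-side registrations, with $hostnqn standing in for the
# uuid NQN printed in the trace (invocation reconstructed, not quoted):
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
	--dhchap-key key0 --dhchap-ctrlr-key ckey0   # bidirectional auth
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
	--dhchap-key key3                            # no ctrlr key: host-only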
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:41.778 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:41.778 20:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:42.037 00:35:42.296 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:42.296 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:42.296 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:42.556 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.556 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:42.556 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.556 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:42.556 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.556 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:42.556 { 00:35:42.556 "auth": { 00:35:42.556 "dhgroup": "ffdhe6144", 00:35:42.556 "digest": "sha512", 00:35:42.556 "state": "completed" 00:35:42.556 }, 00:35:42.556 "cntlid": 129, 00:35:42.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:35:42.556 "listen_address": { 00:35:42.556 "adrfam": "IPv4", 00:35:42.556 "traddr": "10.0.0.3", 00:35:42.556 "trsvcid": "4420", 00:35:42.556 "trtype": "TCP" 00:35:42.556 }, 00:35:42.556 "peer_address": { 00:35:42.556 "adrfam": "IPv4", 00:35:42.556 "traddr": "10.0.0.1", 00:35:42.556 "trsvcid": "38522", 00:35:42.556 "trtype": "TCP" 00:35:42.556 }, 00:35:42.556 "qid": 0, 00:35:42.556 "state": "enabled", 00:35:42.556 "thread": "nvmf_tgt_poll_group_000" 00:35:42.556 } 00:35:42.556 ]' 00:35:42.556 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:42.556 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:35:42.556 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:42.556 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:35:42.556 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:42.815 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:42.815 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:42.815 20:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:43.074 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:35:43.074 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:35:43.641 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:43.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:43.641 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:35:43.641 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.641 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:43.641 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.641 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:43.641 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:43.641 20:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:43.899 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:35:43.899 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:43.899 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:35:43.899 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:35:43.899 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:35:43.899 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:43.899 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:43.899 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.899 20:34:39 
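# Aside: on the kernel-host side (target/auth.sh@36) the per-key secrets are
# handed straight to nvme-cli. The invocation below mirrors the flags printed
# in the trace, with <host-secret>/<ctrl-secret> standing in for the
# DHHC-1:xx:...: blobs shown above:
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
	-q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 \
	--hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 \
	--dhchap-secret '<host-secret>' --dhchap-ctrl-secret '<ctrl-secret>'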
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:43.899 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.899 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:43.899 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:43.899 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:44.466 00:35:44.466 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:44.466 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:44.466 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:44.725 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.725 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:44.725 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.725 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:44.725 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.725 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:44.725 { 00:35:44.725 "auth": { 00:35:44.725 "dhgroup": "ffdhe6144", 00:35:44.725 "digest": "sha512", 00:35:44.725 "state": "completed" 00:35:44.725 }, 00:35:44.725 "cntlid": 131, 00:35:44.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:35:44.725 "listen_address": { 00:35:44.725 "adrfam": "IPv4", 00:35:44.725 "traddr": "10.0.0.3", 00:35:44.725 "trsvcid": "4420", 00:35:44.725 "trtype": "TCP" 00:35:44.725 }, 00:35:44.725 "peer_address": { 00:35:44.725 "adrfam": "IPv4", 00:35:44.725 "traddr": "10.0.0.1", 00:35:44.725 "trsvcid": "48006", 00:35:44.725 "trtype": "TCP" 00:35:44.725 }, 00:35:44.725 "qid": 0, 00:35:44.725 "state": "enabled", 00:35:44.725 "thread": "nvmf_tgt_poll_group_000" 00:35:44.725 } 00:35:44.725 ]' 00:35:44.725 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:44.725 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:35:44.725 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:44.983 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:35:44.983 20:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:35:44.983 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:44.983 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:44.983 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:45.242 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:35:45.242 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:35:45.808 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:45.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:45.808 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:35:45.808 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.808 20:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:45.808 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.808 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:45.808 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:45.808 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:46.375 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:35:46.375 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:46.375 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:35:46.375 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:35:46.375 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:35:46.375 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:46.375 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:46.375 20:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.375 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:46.375 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.375 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:46.375 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:46.375 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:46.634 00:35:46.634 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:46.634 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:46.634 20:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:46.892 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.892 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:46.892 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.892 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:46.892 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.892 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:46.892 { 00:35:46.892 "auth": { 00:35:46.892 "dhgroup": "ffdhe6144", 00:35:46.892 "digest": "sha512", 00:35:46.892 "state": "completed" 00:35:46.892 }, 00:35:46.892 "cntlid": 133, 00:35:46.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:35:46.892 "listen_address": { 00:35:46.892 "adrfam": "IPv4", 00:35:46.892 "traddr": "10.0.0.3", 00:35:46.892 "trsvcid": "4420", 00:35:46.892 "trtype": "TCP" 00:35:46.892 }, 00:35:46.892 "peer_address": { 00:35:46.892 "adrfam": "IPv4", 00:35:46.892 "traddr": "10.0.0.1", 00:35:46.892 "trsvcid": "48040", 00:35:46.892 "trtype": "TCP" 00:35:46.892 }, 00:35:46.892 "qid": 0, 00:35:46.892 "state": "enabled", 00:35:46.892 "thread": "nvmf_tgt_poll_group_000" 00:35:46.892 } 00:35:46.892 ]' 00:35:46.892 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:47.150 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:35:47.150 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:47.150 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:35:47.150 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:47.150 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:47.150 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:47.150 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:47.408 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:35:47.409 20:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:35:47.975 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:48.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:48.233 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:35:48.233 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.233 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:48.233 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.233 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:48.233 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:48.233 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:48.491 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:35:48.491 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:48.491 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:35:48.491 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:35:48.491 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:35:48.491 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:48.491 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key3 00:35:48.491 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.491 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:48.491 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.491 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:35:48.491 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:48.491 20:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:49.058 00:35:49.058 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:49.058 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:49.058 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:49.316 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.316 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:49.316 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.316 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:49.316 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.316 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:49.316 { 00:35:49.316 "auth": { 00:35:49.316 "dhgroup": "ffdhe6144", 00:35:49.316 "digest": "sha512", 00:35:49.316 "state": "completed" 00:35:49.316 }, 00:35:49.316 "cntlid": 135, 00:35:49.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:35:49.316 "listen_address": { 00:35:49.316 "adrfam": "IPv4", 00:35:49.316 "traddr": "10.0.0.3", 00:35:49.316 "trsvcid": "4420", 00:35:49.316 "trtype": "TCP" 00:35:49.316 }, 00:35:49.316 "peer_address": { 00:35:49.316 "adrfam": "IPv4", 00:35:49.316 "traddr": "10.0.0.1", 00:35:49.316 "trsvcid": "48062", 00:35:49.316 "trtype": "TCP" 00:35:49.316 }, 00:35:49.316 "qid": 0, 00:35:49.316 "state": "enabled", 00:35:49.316 "thread": "nvmf_tgt_poll_group_000" 00:35:49.316 } 00:35:49.316 ]' 00:35:49.316 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:49.316 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:35:49.316 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:49.316 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:35:49.316 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:49.316 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:49.316 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:49.316 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:49.575 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:35:49.575 20:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:35:50.509 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:50.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:50.509 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:35:50.509 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.509 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:50.509 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.509 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:35:50.509 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:50.509 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:50.509 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:50.768 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:35:50.768 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:50.768 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:35:50.768 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:35:50.768 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:35:50.768 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:50.768 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:50.768 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.768 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:50.768 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.768 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:50.768 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:50.768 20:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:51.334 00:35:51.334 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:51.334 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:51.334 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:51.900 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.900 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:51.900 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.900 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:51.900 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.900 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:51.900 { 00:35:51.900 "auth": { 00:35:51.900 "dhgroup": "ffdhe8192", 00:35:51.900 "digest": "sha512", 00:35:51.900 "state": "completed" 00:35:51.900 }, 00:35:51.900 "cntlid": 137, 00:35:51.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:35:51.900 "listen_address": { 00:35:51.900 "adrfam": "IPv4", 00:35:51.900 "traddr": "10.0.0.3", 00:35:51.900 "trsvcid": "4420", 00:35:51.900 "trtype": "TCP" 00:35:51.900 }, 00:35:51.900 "peer_address": { 00:35:51.900 "adrfam": "IPv4", 00:35:51.900 "traddr": "10.0.0.1", 00:35:51.900 "trsvcid": "48070", 00:35:51.900 "trtype": "TCP" 00:35:51.900 }, 00:35:51.900 "qid": 0, 00:35:51.900 "state": "enabled", 00:35:51.900 "thread": "nvmf_tgt_poll_group_000" 00:35:51.900 } 00:35:51.900 ]' 00:35:51.900 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:51.900 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:35:51.900 20:34:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:51.900 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:35:51.900 20:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:51.900 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:51.900 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:51.900 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:52.159 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:35:52.159 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:35:53.094 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:53.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:53.094 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:35:53.094 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.094 20:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:53.094 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.094 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:53.094 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:53.094 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:53.094 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:35:53.094 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:53.094 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:35:53.094 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:35:53.094 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:35:53.094 20:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:53.094 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:53.094 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.094 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:53.094 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.094 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:53.094 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:53.094 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:54.027 00:35:54.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:54.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:54.027 20:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:54.027 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.027 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:54.027 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.027 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:54.027 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.027 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:54.027 { 00:35:54.027 "auth": { 00:35:54.027 "dhgroup": "ffdhe8192", 00:35:54.027 "digest": "sha512", 00:35:54.027 "state": "completed" 00:35:54.027 }, 00:35:54.027 "cntlid": 139, 00:35:54.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:35:54.027 "listen_address": { 00:35:54.027 "adrfam": "IPv4", 00:35:54.027 "traddr": "10.0.0.3", 00:35:54.027 "trsvcid": "4420", 00:35:54.027 "trtype": "TCP" 00:35:54.027 }, 00:35:54.027 "peer_address": { 00:35:54.027 "adrfam": "IPv4", 00:35:54.027 "traddr": "10.0.0.1", 00:35:54.027 "trsvcid": "46860", 00:35:54.027 "trtype": "TCP" 00:35:54.027 }, 00:35:54.027 "qid": 0, 00:35:54.027 "state": "enabled", 00:35:54.027 "thread": "nvmf_tgt_poll_group_000" 00:35:54.027 } 00:35:54.027 ]' 00:35:54.027 20:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:54.027 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:35:54.027 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:54.285 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:35:54.285 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:54.285 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:54.285 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:54.285 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:54.542 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:35:54.543 20:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: --dhchap-ctrl-secret DHHC-1:02:Mzg2YTViNDhhM2Q5ZWUyZWU0NzdiNDRiYWI5MDJhY2U2ZGZmN2MzNjBlZmFhODVhjx41NQ==: 00:35:55.109 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:55.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:55.109 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:35:55.109 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.109 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:55.109 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.109 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:55.109 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:55.109 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:55.367 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:35:55.367 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:55.367 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:35:55.367 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:35:55.367 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:35:55.367 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:55.367 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:55.367 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.367 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:55.367 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.367 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:55.367 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:55.367 20:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:55.934 00:35:55.934 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:55.934 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:55.934 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:56.530 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.530 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:56.530 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.530 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:56.530 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.530 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:56.530 { 00:35:56.530 "auth": { 00:35:56.530 "dhgroup": "ffdhe8192", 00:35:56.530 "digest": "sha512", 00:35:56.530 "state": "completed" 00:35:56.530 }, 00:35:56.530 "cntlid": 141, 00:35:56.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:35:56.530 "listen_address": { 00:35:56.530 "adrfam": "IPv4", 00:35:56.530 "traddr": "10.0.0.3", 00:35:56.530 "trsvcid": "4420", 00:35:56.530 "trtype": "TCP" 00:35:56.530 }, 00:35:56.530 "peer_address": { 00:35:56.530 "adrfam": "IPv4", 00:35:56.530 "traddr": "10.0.0.1", 00:35:56.530 "trsvcid": "46890", 00:35:56.530 "trtype": "TCP" 00:35:56.530 }, 00:35:56.530 "qid": 0, 00:35:56.530 "state": 
"enabled", 00:35:56.530 "thread": "nvmf_tgt_poll_group_000" 00:35:56.530 } 00:35:56.530 ]' 00:35:56.530 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:56.530 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:35:56.530 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:56.530 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:35:56.530 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:56.530 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:56.530 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:56.530 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:56.789 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:35:56.789 20:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:01:NjRkNDBmZTc4NTA4MDM1MjQ0MjNkMGRlMDA5Y2U0N2JVcZES: 00:35:57.726 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:57.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:57.726 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:35:57.726 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.726 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:57.726 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.726 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:57.726 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:57.726 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:57.726 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:35:57.726 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:57.726 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:35:57.726 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:35:57.726 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:35:57.726 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:57.726 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key3 00:35:57.726 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.726 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:57.726 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.726 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:35:57.726 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:57.726 20:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:58.294 00:35:58.553 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:58.553 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:58.553 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:58.812 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.812 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:58.812 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.812 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:58.812 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.812 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:58.812 { 00:35:58.812 "auth": { 00:35:58.812 "dhgroup": "ffdhe8192", 00:35:58.812 "digest": "sha512", 00:35:58.812 "state": "completed" 00:35:58.812 }, 00:35:58.812 "cntlid": 143, 00:35:58.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:35:58.812 "listen_address": { 00:35:58.812 "adrfam": "IPv4", 00:35:58.812 "traddr": "10.0.0.3", 00:35:58.812 "trsvcid": "4420", 00:35:58.812 "trtype": "TCP" 00:35:58.812 }, 00:35:58.812 "peer_address": { 00:35:58.812 "adrfam": "IPv4", 00:35:58.812 "traddr": "10.0.0.1", 00:35:58.812 "trsvcid": "46916", 00:35:58.812 "trtype": "TCP" 00:35:58.812 }, 00:35:58.812 "qid": 0, 00:35:58.812 
"state": "enabled", 00:35:58.812 "thread": "nvmf_tgt_poll_group_000" 00:35:58.812 } 00:35:58.812 ]' 00:35:58.812 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:58.812 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:35:58.812 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:58.812 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:35:58.812 20:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:58.812 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:58.812 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:58.812 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:59.071 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:35:59.071 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:36:00.005 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:00.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:00.005 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:36:00.005 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.005 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:00.005 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.005 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:36:00.006 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:36:00.006 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:36:00.006 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:00.006 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:00.006 20:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:00.264 20:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:36:00.264 20:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:00.264 20:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:00.264 20:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:36:00.264 20:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:36:00.264 20:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:00.264 20:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:00.264 20:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.264 20:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:00.264 20:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.264 20:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:00.264 20:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:00.264 20:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:00.829 00:36:00.829 20:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:00.829 20:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:00.829 20:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:01.088 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.088 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:01.088 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.088 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:01.088 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.088 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:01.088 { 00:36:01.088 "auth": { 00:36:01.088 "dhgroup": "ffdhe8192", 00:36:01.088 "digest": "sha512", 00:36:01.088 "state": "completed" 00:36:01.088 }, 00:36:01.088 
"cntlid": 145, 00:36:01.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:36:01.088 "listen_address": { 00:36:01.088 "adrfam": "IPv4", 00:36:01.088 "traddr": "10.0.0.3", 00:36:01.088 "trsvcid": "4420", 00:36:01.088 "trtype": "TCP" 00:36:01.088 }, 00:36:01.088 "peer_address": { 00:36:01.088 "adrfam": "IPv4", 00:36:01.088 "traddr": "10.0.0.1", 00:36:01.088 "trsvcid": "46946", 00:36:01.088 "trtype": "TCP" 00:36:01.088 }, 00:36:01.088 "qid": 0, 00:36:01.088 "state": "enabled", 00:36:01.088 "thread": "nvmf_tgt_poll_group_000" 00:36:01.088 } 00:36:01.088 ]' 00:36:01.088 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:01.088 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:01.088 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:01.346 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:36:01.346 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:01.346 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:01.346 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:01.346 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:01.605 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:36:01.605 20:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:00:NzU0ZGViMmZiMWZkZDNmZjg2ZGMzMzMwMDFkMzAwZTJmZDYwNDRlYzAxNDJhNmMwV9F6jg==: --dhchap-ctrl-secret DHHC-1:03:YWVmZGM5ZjQ2MGUzNGRjMjY4OTdmNTliY2NhYmEwMGI3NTc0YjhjZjZlMTY0ODA3MTYxN2JhYzYwNDk0MjgxMJS15DY=: 00:36:02.173 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:02.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:02.173 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:36:02.173 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.173 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:02.173 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.173 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 00:36:02.173 20:34:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.173 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:02.173 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.173 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:36:02.174 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:36:02.174 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:36:02.174 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:36:02.174 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:02.174 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:36:02.174 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:02.174 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:36:02.174 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:36:02.174 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:36:02.741 2024/10/01 20:34:57 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:36:02.741 request: 00:36:02.741 { 00:36:02.741 "method": "bdev_nvme_attach_controller", 00:36:02.741 "params": { 00:36:02.741 "name": "nvme0", 00:36:02.741 "trtype": "tcp", 00:36:02.741 "traddr": "10.0.0.3", 00:36:02.741 "adrfam": "ipv4", 00:36:02.741 "trsvcid": "4420", 00:36:02.741 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:36:02.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:36:02.741 "prchk_reftag": false, 00:36:02.741 "prchk_guard": false, 00:36:02.741 "hdgst": false, 00:36:02.741 "ddgst": false, 00:36:02.741 "dhchap_key": "key2", 00:36:02.741 "allow_unrecognized_csi": false 00:36:02.741 } 00:36:02.741 } 00:36:02.741 Got JSON-RPC error response 00:36:02.741 GoRPCClient: error on JSON-RPC call 00:36:02.741 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:36:02.741 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 
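[editor's note] The rejected attach above is the expected negative-path result: at target/auth.sh@144 the host was re-registered with key1 only, so a DH-HMAC-CHAP attach presenting key2 fails with Code=-5 (Input/output error), and the NOT wrapper from common/autotest_common.sh turns that failure into a test pass. A minimal standalone sketch of the same check follows, reusing only the rpc.py path, socket, addresses, and NQNs that appear verbatim in this log; the surrounding script scaffolding is illustrative and not part of the test suite.

#!/usr/bin/env bash
# Negative DH-HMAC-CHAP check (sketch): the attach must FAIL because the
# target side only knows key1 for this host, while we present key2.
# Assumes an SPDK target listening on 10.0.0.3:4420 and a host-side rpc.py
# socket at /var/tmp/host.sock, as in the run logged above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703

if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2; then
    echo "FAIL: attach with mismatched key2 unexpectedly succeeded" >&2
    exit 1
fi
echo "PASS: attach with key2 was rejected as expected"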
00:36:02.741 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:02.742 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:02.742 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:36:02.742 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.742 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:02.742 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.742 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:02.742 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.742 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:02.742 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.742 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:02.742 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:36:02.742 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:02.742 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:36:02.742 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:02.742 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:36:02.742 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:02.742 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:02.742 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:02.742 20:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:03.310 2024/10/01 20:34:58 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:36:03.310 request: 00:36:03.310 { 00:36:03.310 "method": "bdev_nvme_attach_controller", 00:36:03.310 "params": { 00:36:03.310 "name": "nvme0", 00:36:03.310 "trtype": "tcp", 00:36:03.310 "traddr": "10.0.0.3", 00:36:03.310 "adrfam": "ipv4", 00:36:03.310 "trsvcid": "4420", 00:36:03.310 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:36:03.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:36:03.310 "prchk_reftag": false, 00:36:03.310 "prchk_guard": false, 00:36:03.310 "hdgst": false, 00:36:03.310 "ddgst": false, 00:36:03.310 "dhchap_key": "key1", 00:36:03.310 "dhchap_ctrlr_key": "ckey2", 00:36:03.310 "allow_unrecognized_csi": false 00:36:03.310 } 00:36:03.310 } 00:36:03.310 Got JSON-RPC error response 00:36:03.310 GoRPCClient: error on JSON-RPC call 00:36:03.569 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:36:03.569 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:03.569 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:03.569 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:03.569 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:36:03.569 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.569 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:03.569 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.569 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 00:36:03.569 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.569 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:03.569 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.569 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:03.569 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:36:03.569 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:03.569 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:36:03.569 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:03.569 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # 
type -t bdev_connect 00:36:03.569 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:03.569 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:03.569 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:03.569 20:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:04.136 2024/10/01 20:34:59 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:36:04.136 request: 00:36:04.136 { 00:36:04.136 "method": "bdev_nvme_attach_controller", 00:36:04.136 "params": { 00:36:04.136 "name": "nvme0", 00:36:04.136 "trtype": "tcp", 00:36:04.136 "traddr": "10.0.0.3", 00:36:04.136 "adrfam": "ipv4", 00:36:04.136 "trsvcid": "4420", 00:36:04.136 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:36:04.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:36:04.136 "prchk_reftag": false, 00:36:04.136 "prchk_guard": false, 00:36:04.136 "hdgst": false, 00:36:04.136 "ddgst": false, 00:36:04.136 "dhchap_key": "key1", 00:36:04.136 "dhchap_ctrlr_key": "ckey1", 00:36:04.136 "allow_unrecognized_csi": false 00:36:04.136 } 00:36:04.136 } 00:36:04.136 Got JSON-RPC error response 00:36:04.136 GoRPCClient: error on JSON-RPC call 00:36:04.136 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:36:04.136 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:04.136 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:04.136 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:04.136 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:36:04.136 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.136 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:04.136 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.136 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 76322 00:36:04.136 20:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 76322 ']' 00:36:04.136 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 76322 00:36:04.136 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:36:04.136 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:04.136 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76322 00:36:04.136 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:04.136 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:04.136 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76322' 00:36:04.136 killing process with pid 76322 00:36:04.136 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 76322 00:36:04.136 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 76322 00:36:04.395 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:36:04.395 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:04.395 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:04.395 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:04.395 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=81286 00:36:04.395 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:36:04.395 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 81286 00:36:04.395 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 81286 ']' 00:36:04.395 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:04.395 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:04.395 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
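At target/auth.sh line 160 the previous target (pid 76322) has been killed and nvmfappstart relaunches nvmf_tgt inside the nvmf_tgt_ns_spdk network namespace, paused with --wait-for-rpc so the DH-HMAC-CHAP keys can be loaded into the keyring before subsystem initialization, and with the nvmf_auth debug log component enabled. The traced launch, condensed and annotated (flag meanings per the standard SPDK app options):

    # Traced from nvmf/common.sh: relaunch the target paused, with auth logging.
    #   -i 0            shared-memory instance id
    #   -e 0xFFFF       tracepoint group mask, as traced
    #   --wait-for-rpc  hold initialization until the framework_start_init RPC
    #   -L nvmf_auth    enable the nvmf_auth debug log flag
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!                  # 81286 in this run
    waitforlisten "$nvmfpid"    # block until /var/tmp/spdk.sock answers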
00:36:04.395 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:04.395 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:04.654 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:04.654 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:36:04.654 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:04.654 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:04.654 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:04.654 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:04.654 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:36:04.654 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 81286 00:36:04.654 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 81286 ']' 00:36:04.654 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:04.654 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:04.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:04.654 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
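waitforlisten is traced twice here, once inside nvmfappstart and once directly by auth.sh line 163; its visible contract is a bounded poll (max_retries=100) against the app's RPC socket, defaulting to /var/tmp/spdk.sock, returning 0 once the target is accepting RPCs. A minimal sketch consistent with the traced locals; the real helper in autotest_common.sh is more thorough, and the rpc_get_methods probe below is an assumption, not the verbatim check:

    # waitforlisten-style poll (illustrative; the rpc_get_methods probe is assumed).
    waitforlisten_sketch() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died: give up
            # rpc.py exits 0 once the UNIX socket accepts the call
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                    -t 1 rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1    # never came up within the retry budget
    }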
00:36:04.654 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:04.654 20:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:04.913 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:04.913 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:36:04.913 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:36:04.913 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.913 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:05.172 null0 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.YMB 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.UVv ]] 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.UVv 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.TCX 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.4mg ]] 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4mg 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:36:05.172 20:35:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.rxZ 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.C61 ]] 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.C61 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.rei 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key3 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
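The step completing here (auth.sh lines 174-179) pairs the target-side key setup with the host-side attach: each generated key file is registered in the target keyring as keyN, its controller counterpart as ckeyN, the host NQN is authorized for the subsystem with --dhchap-key key3, and the host then attaches through its own RPC socket. Condensed from the traced commands (rpc.py abbreviates the traced /home/vagrant/spdk_repo/spdk/scripts/rpc.py; paths, NQNs and key names are verbatim):

    # Target side: load the key file into the keyring, authorize the host.
    rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.rei
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 \
        --dhchap-key key3

    # Host side: attach with the matching key over the host RPC socket.
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3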
00:36:05.172 20:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:06.129 nvme0n1 00:36:06.129 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:06.130 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:06.130 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:06.388 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.388 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:06.388 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.388 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:06.388 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.388 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:06.388 { 00:36:06.388 "auth": { 00:36:06.388 "dhgroup": "ffdhe8192", 00:36:06.388 "digest": "sha512", 00:36:06.388 "state": "completed" 00:36:06.388 }, 00:36:06.388 "cntlid": 1, 00:36:06.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:36:06.388 "listen_address": { 00:36:06.388 "adrfam": "IPv4", 00:36:06.388 "traddr": "10.0.0.3", 00:36:06.388 "trsvcid": "4420", 00:36:06.388 "trtype": "TCP" 00:36:06.388 }, 00:36:06.388 "peer_address": { 00:36:06.388 "adrfam": "IPv4", 00:36:06.388 "traddr": "10.0.0.1", 00:36:06.388 "trsvcid": "50586", 00:36:06.388 "trtype": "TCP" 00:36:06.388 }, 00:36:06.388 "qid": 0, 00:36:06.388 "state": "enabled", 00:36:06.388 "thread": "nvmf_tgt_poll_group_000" 00:36:06.388 } 00:36:06.388 ]' 00:36:06.388 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:06.388 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:06.388 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:06.388 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:36:06.388 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:06.647 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:06.647 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:06.647 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:06.905 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:36:06.905 20:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:36:07.471 20:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:07.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:07.471 20:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:36:07.471 20:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.471 20:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:07.471 20:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.471 20:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key3 00:36:07.471 20:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.471 20:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:07.471 20:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.471 20:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:36:07.471 20:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:36:08.041 20:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:36:08.041 20:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:36:08.041 20:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:36:08.042 20:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:36:08.042 20:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.042 20:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:36:08.042 20:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.042 20:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:36:08.042 20:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:08.042 20:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:08.042 2024/10/01 20:35:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:36:08.042 request: 00:36:08.042 { 00:36:08.042 "method": "bdev_nvme_attach_controller", 00:36:08.042 "params": { 00:36:08.042 "name": "nvme0", 00:36:08.042 "trtype": "tcp", 00:36:08.042 "traddr": "10.0.0.3", 00:36:08.042 "adrfam": "ipv4", 00:36:08.042 "trsvcid": "4420", 00:36:08.042 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:36:08.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:36:08.042 "prchk_reftag": false, 00:36:08.042 "prchk_guard": false, 00:36:08.042 "hdgst": false, 00:36:08.042 "ddgst": false, 00:36:08.042 "dhchap_key": "key3", 00:36:08.042 "allow_unrecognized_csi": false 00:36:08.042 } 00:36:08.042 } 00:36:08.042 Got JSON-RPC error response 00:36:08.042 GoRPCClient: error on JSON-RPC call 00:36:08.042 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:36:08.042 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:08.042 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:08.042 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:08.042 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:36:08.042 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:36:08.042 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:36:08.042 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:36:08.608 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:36:08.608 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:36:08.608 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:36:08.608 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:36:08.608 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.608 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@642 -- # type -t bdev_connect 00:36:08.608 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.609 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:36:08.609 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:08.609 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:08.868 2024/10/01 20:35:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:36:08.868 request: 00:36:08.868 { 00:36:08.868 "method": "bdev_nvme_attach_controller", 00:36:08.868 "params": { 00:36:08.868 "name": "nvme0", 00:36:08.868 "trtype": "tcp", 00:36:08.868 "traddr": "10.0.0.3", 00:36:08.868 "adrfam": "ipv4", 00:36:08.868 "trsvcid": "4420", 00:36:08.868 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:36:08.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:36:08.868 "prchk_reftag": false, 00:36:08.868 "prchk_guard": false, 00:36:08.868 "hdgst": false, 00:36:08.868 "ddgst": false, 00:36:08.868 "dhchap_key": "key3", 00:36:08.868 "allow_unrecognized_csi": false 00:36:08.868 } 00:36:08.868 } 00:36:08.868 Got JSON-RPC error response 00:36:08.868 GoRPCClient: error on JSON-RPC call 00:36:08.868 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:36:08.868 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:08.868 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:08.868 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:08.868 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:36:08.868 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:36:08.868 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:36:08.868 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:08.868 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:08.868 20:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:09.126 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:36:09.126 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.126 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:09.126 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.126 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:36:09.126 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.126 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:09.126 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.126 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:36:09.126 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:36:09.126 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:36:09.126 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:36:09.126 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:09.126 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:36:09.126 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:09.126 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:36:09.126 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:36:09.126 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:36:09.385 2024/10/01 20:35:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:36:09.385 request: 00:36:09.385 { 00:36:09.385 "method": "bdev_nvme_attach_controller", 00:36:09.385 "params": { 00:36:09.385 "name": "nvme0", 00:36:09.385 "trtype": "tcp", 00:36:09.385 "traddr": "10.0.0.3", 00:36:09.385 "adrfam": "ipv4", 00:36:09.385 "trsvcid": "4420", 00:36:09.385 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:36:09.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:36:09.385 "prchk_reftag": false, 00:36:09.385 "prchk_guard": false, 00:36:09.385 "hdgst": false, 00:36:09.385 "ddgst": false, 00:36:09.385 "dhchap_key": "key0", 00:36:09.385 "dhchap_ctrlr_key": "key1", 00:36:09.385 "allow_unrecognized_csi": false 00:36:09.385 } 00:36:09.385 } 00:36:09.385 Got JSON-RPC error response 00:36:09.385 GoRPCClient: error on JSON-RPC call 00:36:09.385 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:36:09.385 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:09.385 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:09.385 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:09.385 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:36:09.385 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:36:09.385 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:36:09.645 nvme0n1 00:36:09.903 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:36:09.903 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:09.903 20:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:36:09.903 20:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.903 20:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:09.903 20:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:10.162 20:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 00:36:10.162 20:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.162 20:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:36:10.162 20:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.162 20:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:36:10.162 20:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:36:10.162 20:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:36:11.098 nvme0n1 00:36:11.098 20:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:36:11.098 20:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:36:11.098 20:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:11.356 20:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.356 20:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key2 --dhchap-ctrlr-key key3 00:36:11.356 20:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.356 20:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:11.356 20:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.356 20:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:36:11.356 20:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:36:11.356 20:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:11.924 20:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.924 20:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:36:11.924 20:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: --dhchap-ctrl-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: 00:36:12.491 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
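With nvmf_subsystem_set_keys having just rotated the subsystem to key2/key3, the test re-attached using the kernel initiator above. Unlike the SPDK host path, nvme-cli takes the raw DHHC-1 secret strings directly (the host secret, and for bidirectional authentication the controller secret) rather than keyring names; the secrets passed correspond to key2 and key3, matching the preceding set_keys call. The traced command, reflowed for readability (secrets verbatim from this run; they are throwaway CI keys):

    # Kernel-initiator attach with explicit DH-HMAC-CHAP secrets (as traced).
    #   --dhchap-secret       host secret for the DH-HMAC-CHAP exchange
    #   --dhchap-ctrl-secret  controller secret, enabling bidirectional auth
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 \
        --hostid e1b29161-2f6e-4e88-9903-c5274a848703 -l 0 \
        --dhchap-secret DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: \
        --dhchap-ctrl-secret DHHC-1:03:YTA0YjI2NDliN2M2MTViNjUwZWMwOWEzNzZlNzQwYzIwZjYxOTc0NWQwODczNzliM2Y3ODU3MjJmZDIzNTAzNd3SHfc=: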
00:36:12.491 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:36:12.491 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:36:12.491 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:36:12.491 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:36:12.491 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:36:12.491 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:36:12.491 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:12.491 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:12.750 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:36:12.750 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:36:12.750 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:36:12.750 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:36:12.750 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:12.750 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:36:12.750 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:12.750 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:36:12.750 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:36:12.750 20:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:36:13.379 2024/10/01 20:35:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:36:13.379 request: 00:36:13.379 { 00:36:13.379 "method": "bdev_nvme_attach_controller", 00:36:13.379 "params": { 00:36:13.379 "name": "nvme0", 00:36:13.379 "trtype": "tcp", 00:36:13.379 "traddr": "10.0.0.3", 00:36:13.379 "adrfam": "ipv4", 
00:36:13.379 "trsvcid": "4420", 00:36:13.379 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:36:13.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703", 00:36:13.379 "prchk_reftag": false, 00:36:13.379 "prchk_guard": false, 00:36:13.379 "hdgst": false, 00:36:13.379 "ddgst": false, 00:36:13.379 "dhchap_key": "key1", 00:36:13.379 "allow_unrecognized_csi": false 00:36:13.379 } 00:36:13.379 } 00:36:13.379 Got JSON-RPC error response 00:36:13.379 GoRPCClient: error on JSON-RPC call 00:36:13.379 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:36:13.379 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:13.379 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:13.379 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:13.379 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:36:13.379 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:36:13.379 20:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:36:14.314 nvme0n1 00:36:14.314 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:36:14.314 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:36:14.315 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:14.573 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.573 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:14.573 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:14.832 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:36:14.832 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.832 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:14.832 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.832 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:36:14.832 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:36:14.832 20:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:36:15.091 nvme0n1 00:36:15.091 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:36:15.091 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:15.091 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:36:15.350 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.350 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:15.350 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:15.609 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 --dhchap-ctrlr-key key3 00:36:15.609 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.609 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:15.609 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.609 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: '' 2s 00:36:15.609 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:36:15.609 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:36:15.609 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: 00:36:15.609 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:36:15.609 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:36:15.609 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:36:15.609 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: ]] 00:36:15.609 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MzVkMzE0ODljYTM3Y2VkYTUwOGE3YjZkNTM3NjE2NTMvSQcC: 00:36:15.609 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:36:15.609 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:36:15.609 20:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:36:18.140 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@241 -- # waitforblk nvme0n1 00:36:18.140 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:36:18.140 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:36:18.141 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:36:18.141 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:36:18.141 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:36:18.141 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:36:18.141 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key1 --dhchap-ctrlr-key key2 00:36:18.141 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.141 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:18.141 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.141 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: 2s 00:36:18.141 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:36:18.141 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:36:18.141 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:36:18.141 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: 00:36:18.141 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:36:18.141 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:36:18.141 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:36:18.141 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: ]] 00:36:18.141 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MTkwZWE0MzUxNjQ2MTRjNzk3OWM5YWYwMmQ0ZWZiNTljMTRlYzQwM2RmMjc0YTViWfXbIw==: 00:36:18.141 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:36:18.141 20:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:36:20.043 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:36:20.043 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:36:20.043 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:36:20.043 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:36:20.043 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:36:20.043 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:36:20.043 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:36:20.043 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:20.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:20.043 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key0 --dhchap-ctrlr-key key1 00:36:20.043 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.043 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:20.043 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.043 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:20.043 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:20.043 20:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:20.610 nvme0n1 00:36:20.610 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key2 --dhchap-ctrlr-key key3 00:36:20.610 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.610 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:20.610 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.610 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:36:20.610 20:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:36:21.177 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:36:21.177 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:36:21.177 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:36:21.743 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.743 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:36:21.743 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.743 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:21.743 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.743 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:36:21.743 20:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:36:22.000 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:36:22.000 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:36:22.000 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:22.257 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.257 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key2 --dhchap-ctrlr-key key3 00:36:22.257 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.257 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:22.257 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.257 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:36:22.257 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:36:22.257 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:36:22.257 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:36:22.257 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:22.257 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:36:22.257 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:22.257 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:36:22.257 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 
--dhchap-ctrlr-key key3 00:36:22.822 2024/10/01 20:35:17 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:36:22.822 request: 00:36:22.822 { 00:36:22.822 "method": "bdev_nvme_set_keys", 00:36:22.822 "params": { 00:36:22.822 "name": "nvme0", 00:36:22.822 "dhchap_key": "key1", 00:36:22.822 "dhchap_ctrlr_key": "key3" 00:36:22.822 } 00:36:22.822 } 00:36:22.822 Got JSON-RPC error response 00:36:22.822 GoRPCClient: error on JSON-RPC call 00:36:22.822 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:36:22.822 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:22.822 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:22.822 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:22.822 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:36:22.822 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:36:22.822 20:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:23.080 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:36:23.080 20:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:36:24.014 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:36:24.014 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:36:24.014 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:24.276 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:36:24.276 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key0 --dhchap-ctrlr-key key1 00:36:24.276 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.276 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:24.276 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.276 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:24.276 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:24.276 20:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:25.213 nvme0n1 00:36:25.213 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --dhchap-key key2 --dhchap-ctrlr-key key3 00:36:25.213 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.213 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:25.213 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.213 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:36:25.213 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:36:25.213 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:36:25.213 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:36:25.213 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:25.213 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:36:25.213 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:25.213 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:36:25.213 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:36:25.779 2024/10/01 20:35:20 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:36:25.779 request: 00:36:25.779 { 00:36:25.779 "method": "bdev_nvme_set_keys", 00:36:25.779 "params": { 00:36:25.779 "name": "nvme0", 00:36:25.779 "dhchap_key": "key2", 00:36:25.779 "dhchap_ctrlr_key": "key0" 00:36:25.779 } 00:36:25.779 } 00:36:25.779 Got JSON-RPC error response 00:36:25.779 GoRPCClient: error on JSON-RPC call 00:36:25.779 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:36:25.779 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:25.780 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:25.780 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:25.780 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:36:25.780 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:25.780 20:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:36:26.345 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:36:26.345 20:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:36:27.281 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:36:27.281 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:27.281 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:36:27.540 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:36:27.540 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:36:27.540 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:36:27.540 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 76353 00:36:27.540 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 76353 ']' 00:36:27.540 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 76353 00:36:27.540 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:36:27.540 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:27.540 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76353 00:36:27.540 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:27.540 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:27.540 killing process with pid 76353 00:36:27.540 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76353' 00:36:27.540 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 76353 00:36:27.540 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 76353 00:36:27.798 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:36:27.798 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:27.798 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:36:27.798 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:27.798 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:36:27.798 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:27.798 20:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:27.798 rmmod nvme_tcp 00:36:27.798 rmmod nvme_fabrics 00:36:27.798 rmmod nvme_keyring 00:36:27.798 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:27.798 20:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:36:27.798 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:36:27.798 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 81286 ']' 00:36:27.798 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 81286 00:36:27.798 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 81286 ']' 00:36:27.798 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 81286 00:36:27.798 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:36:27.798 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:27.798 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81286 00:36:27.798 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:27.798 killing process with pid 81286 00:36:27.798 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:27.798 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81286' 00:36:27.798 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 81286 00:36:27.798 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 81286 00:36:28.057 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:28.057 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:28.057 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:28.057 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:36:28.057 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:36:28.057 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:28.057 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:36:28.057 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:28.057 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:28.057 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:28.057 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:28.057 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:28.057 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:28.316 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:28.316 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:28.316 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip 
link set nvmf_tgt_br down 00:36:28.316 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:28.316 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:28.316 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:28.316 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:28.316 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:28.316 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:28.316 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:28.316 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:28.316 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:28.316 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:28.316 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:36:28.316 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.YMB /tmp/spdk.key-sha256.TCX /tmp/spdk.key-sha384.rxZ /tmp/spdk.key-sha512.rei /tmp/spdk.key-sha512.UVv /tmp/spdk.key-sha384.4mg /tmp/spdk.key-sha256.C61 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:36:28.316 00:36:28.316 real 3m16.407s 00:36:28.316 user 7m58.953s 00:36:28.316 sys 0m23.159s 00:36:28.316 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:28.316 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:28.316 ************************************ 00:36:28.316 END TEST nvmf_auth_target 00:36:28.316 ************************************ 00:36:28.316 20:35:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:36:28.316 20:35:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:36:28.316 20:35:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:28.316 20:35:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:28.316 20:35:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:36:28.575 ************************************ 00:36:28.575 START TEST nvmf_bdevio_no_huge 00:36:28.575 ************************************ 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:36:28.576 * Looking for test storage... 
00:36:28.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:28.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:28.576 --rc genhtml_branch_coverage=1 00:36:28.576 --rc genhtml_function_coverage=1 00:36:28.576 --rc genhtml_legend=1 00:36:28.576 --rc geninfo_all_blocks=1 00:36:28.576 --rc geninfo_unexecuted_blocks=1 00:36:28.576 00:36:28.576 ' 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:28.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:28.576 --rc genhtml_branch_coverage=1 00:36:28.576 --rc genhtml_function_coverage=1 00:36:28.576 --rc genhtml_legend=1 00:36:28.576 --rc geninfo_all_blocks=1 00:36:28.576 --rc geninfo_unexecuted_blocks=1 00:36:28.576 00:36:28.576 ' 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:28.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:28.576 --rc genhtml_branch_coverage=1 00:36:28.576 --rc genhtml_function_coverage=1 00:36:28.576 --rc genhtml_legend=1 00:36:28.576 --rc geninfo_all_blocks=1 00:36:28.576 --rc geninfo_unexecuted_blocks=1 00:36:28.576 00:36:28.576 ' 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:28.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:28.576 --rc genhtml_branch_coverage=1 00:36:28.576 --rc genhtml_function_coverage=1 00:36:28.576 --rc genhtml_legend=1 00:36:28.576 --rc geninfo_all_blocks=1 00:36:28.576 --rc geninfo_unexecuted_blocks=1 00:36:28.576 00:36:28.576 ' 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:28.576 
20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.576 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:28.577 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@458 -- # nvmf_veth_init 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:28.577 
20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:36:28.577 Cannot find device "nvmf_init_br" 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:36:28.577 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:36:28.577 Cannot find device "nvmf_init_br2" 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:36:28.836 Cannot find device "nvmf_tgt_br" 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:36:28.836 Cannot find device "nvmf_tgt_br2" 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:36:28.836 Cannot find device "nvmf_init_br" 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:36:28.836 Cannot find device "nvmf_init_br2" 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:36:28.836 Cannot find device "nvmf_tgt_br" 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:36:28.836 Cannot find device "nvmf_tgt_br2" 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:36:28.836 Cannot find device "nvmf_br" 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:36:28.836 Cannot find device "nvmf_init_if" 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:36:28.836 Cannot find device "nvmf_init_if2" 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:36:28.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:28.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:28.836 20:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:28.836 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:28.836 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:28.836 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:28.836 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:36:28.836 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:36:28.836 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:36:28.836 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:36:28.836 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:36:28.836 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:36:28.836 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:36:28.836 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:36:29.095 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:36:29.095 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:29.095 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:29.095 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:29.095 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:36:29.095 20:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:36:29.095 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:36:29.095 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:36:29.095 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:29.095 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:29.095 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:29.095 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:36:29.096 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:29.096 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:36:29.096 00:36:29.096 --- 10.0.0.3 ping statistics --- 00:36:29.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:29.096 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:36:29.096 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:36:29.096 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:36:29.096 00:36:29.096 --- 10.0.0.4 ping statistics --- 00:36:29.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:29.096 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:29.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:29.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:36:29.096 00:36:29.096 --- 10.0.0.1 ping statistics --- 00:36:29.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:29.096 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:36:29.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:29.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:36:29.096 00:36:29.096 --- 10.0.0.2 ping statistics --- 00:36:29.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:29.096 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # return 0 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=82136 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 82136 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 82136 ']' 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:29.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:29.096 20:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:36:29.096 [2024-10-01 20:35:24.296908] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:36:29.096 [2024-10-01 20:35:24.297010] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:36:29.355 [2024-10-01 20:35:24.446172] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:29.355 [2024-10-01 20:35:24.536271] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:29.355 [2024-10-01 20:35:24.536339] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:29.355 [2024-10-01 20:35:24.536364] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:29.355 [2024-10-01 20:35:24.536371] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:29.355 [2024-10-01 20:35:24.536377] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:29.355 [2024-10-01 20:35:24.536486] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:36:29.355 [2024-10-01 20:35:24.536620] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:36:29.355 [2024-10-01 20:35:24.537372] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:36:29.355 [2024-10-01 20:35:24.537379] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:36:30.290 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:30.290 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:36:30.290 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:30.290 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:30.290 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:36:30.290 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:30.290 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:30.290 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.290 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:36:30.290 [2024-10-01 20:35:25.273453] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:30.290 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.290 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:30.290 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.290 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:36:30.290 Malloc0 00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:36:30.291 [2024-10-01 20:35:25.315097] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:30.291 { 00:36:30.291 "params": { 00:36:30.291 "name": "Nvme$subsystem", 00:36:30.291 "trtype": "$TEST_TRANSPORT", 00:36:30.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:30.291 "adrfam": "ipv4", 00:36:30.291 "trsvcid": "$NVMF_PORT", 00:36:30.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:30.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:30.291 "hdgst": ${hdgst:-false}, 00:36:30.291 "ddgst": ${ddgst:-false} 00:36:30.291 }, 00:36:30.291 "method": "bdev_nvme_attach_controller" 00:36:30.291 } 00:36:30.291 EOF 00:36:30.291 )") 00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 
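
(The gen_nvmf_target_json trace above, unpacked.) The helper renders one JSON fragment per subsystem with a here-doc inside a command substitution, accumulates the fragments in a bash array, joins them by setting IFS=',', and pipes the result through jq; bdevio then consumes that JSON via --json /dev/fd/62. A minimal sketch of the pattern, simplified to a single subsystem so the joined string is itself valid JSON (the real helper lives in the test's nvmf/common.sh and fills in transport, address, and digest details):

config=()
for subsystem in 1; do
    config+=("$(
        cat <<EOF
{ "params": { "name": "Nvme$subsystem" }, "method": "bdev_nvme_attach_controller" }
EOF
    )")
done
IFS=,                                  # "${config[*]}" now joins fragments with commas
printf '%s\n' "${config[*]}" | jq .    # validate and pretty-print, as traced above
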
00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:36:30.291 20:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:30.291 "params": { 00:36:30.291 "name": "Nvme1", 00:36:30.291 "trtype": "tcp", 00:36:30.291 "traddr": "10.0.0.3", 00:36:30.291 "adrfam": "ipv4", 00:36:30.291 "trsvcid": "4420", 00:36:30.291 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:30.291 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:30.291 "hdgst": false, 00:36:30.291 "ddgst": false 00:36:30.291 }, 00:36:30.291 "method": "bdev_nvme_attach_controller" 00:36:30.291 }' 00:36:30.291 [2024-10-01 20:35:25.376727] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:36:30.291 [2024-10-01 20:35:25.376806] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82190 ] 00:36:30.291 [2024-10-01 20:35:25.520992] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:30.549 [2024-10-01 20:35:25.701973] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:30.549 [2024-10-01 20:35:25.702145] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.549 [2024-10-01 20:35:25.702163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:36:31.115 I/O targets: 00:36:31.115 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:36:31.115 00:36:31.115 00:36:31.115 CUnit - A unit testing framework for C - Version 2.1-3 00:36:31.115 http://cunit.sourceforge.net/ 00:36:31.115 00:36:31.115 00:36:31.115 Suite: bdevio tests on: Nvme1n1 00:36:31.115 Test: blockdev write read block ...passed 00:36:31.115 Test: blockdev write zeroes read block ...passed 00:36:31.115 Test: blockdev write zeroes read no split ...passed 00:36:31.115 Test: blockdev write zeroes read split ...passed 00:36:31.115 Test: blockdev write zeroes read split partial ...passed 00:36:31.115 Test: blockdev reset ...[2024-10-01 20:35:26.238100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:31.115 [2024-10-01 20:35:26.238220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x181f5f0 (9): Bad file descriptor 00:36:31.115 passed 00:36:31.115 Test: blockdev write read 8 blocks ...[2024-10-01 20:35:26.254916] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:36:31.115 passed 00:36:31.115 Test: blockdev write read size > 128k ...passed 00:36:31.115 Test: blockdev write read invalid size ...passed 00:36:31.115 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:31.115 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:31.115 Test: blockdev write read max offset ...passed 00:36:31.374 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:31.374 Test: blockdev writev readv 8 blocks ...passed 00:36:31.374 Test: blockdev writev readv 30 x 1block ...passed 00:36:31.374 Test: blockdev writev readv block ...passed 00:36:31.374 Test: blockdev writev readv size > 128k ...passed 00:36:31.374 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:31.374 Test: blockdev comparev and writev ...[2024-10-01 20:35:26.427547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:31.374 [2024-10-01 20:35:26.427614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:31.374 [2024-10-01 20:35:26.427634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:31.374 [2024-10-01 20:35:26.427645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:31.374 [2024-10-01 20:35:26.428199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:31.374 [2024-10-01 20:35:26.428259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:31.374 [2024-10-01 20:35:26.428292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:31.374 [2024-10-01 20:35:26.428302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:31.374 [2024-10-01 20:35:26.428648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:31.374 [2024-10-01 20:35:26.428676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:31.374 [2024-10-01 20:35:26.428692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:31.374 [2024-10-01 20:35:26.428702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:31.374 [2024-10-01 20:35:26.429154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:31.374 [2024-10-01 20:35:26.429181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:31.374 [2024-10-01 20:35:26.429198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:31.374 [2024-10-01 20:35:26.429223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:36:31.374 passed 00:36:31.374 Test: blockdev nvme passthru rw ...passed 00:36:31.374 Test: blockdev nvme passthru vendor specific ...[2024-10-01 20:35:26.512423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:31.374 [2024-10-01 20:35:26.512449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:31.374 [2024-10-01 20:35:26.512577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:31.374 [2024-10-01 20:35:26.512594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:31.374 [2024-10-01 20:35:26.512701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:31.374 [2024-10-01 20:35:26.512721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:31.374 [2024-10-01 20:35:26.512827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:31.374 [2024-10-01 20:35:26.512847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:31.374 passed 00:36:31.374 Test: blockdev nvme admin passthru ...passed 00:36:31.374 Test: blockdev copy ...passed 00:36:31.374 00:36:31.374 Run Summary: Type Total Ran Passed Failed Inactive 00:36:31.374 suites 1 1 n/a 0 0 00:36:31.374 tests 23 23 23 0 0 00:36:31.374 asserts 152 152 152 0 n/a 00:36:31.374 00:36:31.374 Elapsed time = 0.910 seconds 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:32.309 rmmod nvme_tcp 00:36:32.309 rmmod nvme_fabrics 00:36:32.309 rmmod nvme_keyring 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:36:32.309 20:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 82136 ']' 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 82136 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 82136 ']' 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 82136 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82136 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:36:32.309 killing process with pid 82136 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82136' 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 82136 00:36:32.309 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 82136 00:36:32.876 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:32.876 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:32.876 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:32.876 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:36:32.876 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:36:32.876 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:32.876 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:36:32.877 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:32.877 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:32.877 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:32.877 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:32.877 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:32.877 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:32.877 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:32.877 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:32.877 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set 
nvmf_tgt_br down 00:36:32.877 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:32.877 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:32.877 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:32.877 20:35:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:32.877 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:32.877 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:32.877 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:32.877 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:32.877 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:32.877 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:32.877 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:36:32.877 00:36:32.877 real 0m4.509s 00:36:32.877 user 0m15.961s 00:36:32.877 sys 0m1.997s 00:36:32.877 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:32.877 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:36:32.877 ************************************ 00:36:32.877 END TEST nvmf_bdevio_no_huge 00:36:32.877 ************************************ 00:36:32.877 20:35:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:36:32.877 20:35:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:32.877 20:35:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:32.877 20:35:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:36:33.137 ************************************ 00:36:33.137 START TEST nvmf_tls 00:36:33.137 ************************************ 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:36:33.137 * Looking for test storage... 
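
(On the teardown pattern that closed the suite above.) killprocess 82136 is autotest_common.sh's guarded kill: it refuses an empty pid, probes the process with kill -0, inspects the process name so it never signals a sudo wrapper, then kills and waits. A sketch reconstructed from the xtrace alone, not copied from the source; the real helper's sudo branch and error handling are richer:

killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1                    # the '[' -z 82136 ']' check
    kill -0 "$pid"                               # still alive?
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [[ $process_name == sudo ]]; then
        return 1                                 # never signal the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                  # reap before the suite reports timing
}
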
00:36:33.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:33.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.137 --rc genhtml_branch_coverage=1 00:36:33.137 --rc genhtml_function_coverage=1 00:36:33.137 --rc genhtml_legend=1 00:36:33.137 --rc geninfo_all_blocks=1 00:36:33.137 --rc geninfo_unexecuted_blocks=1 00:36:33.137 00:36:33.137 ' 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:33.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.137 --rc genhtml_branch_coverage=1 00:36:33.137 --rc genhtml_function_coverage=1 00:36:33.137 --rc genhtml_legend=1 00:36:33.137 --rc geninfo_all_blocks=1 00:36:33.137 --rc geninfo_unexecuted_blocks=1 00:36:33.137 00:36:33.137 ' 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:33.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.137 --rc genhtml_branch_coverage=1 00:36:33.137 --rc genhtml_function_coverage=1 00:36:33.137 --rc genhtml_legend=1 00:36:33.137 --rc geninfo_all_blocks=1 00:36:33.137 --rc geninfo_unexecuted_blocks=1 00:36:33.137 00:36:33.137 ' 00:36:33.137 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:33.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.137 --rc genhtml_branch_coverage=1 00:36:33.137 --rc genhtml_function_coverage=1 00:36:33.138 --rc genhtml_legend=1 00:36:33.138 --rc geninfo_all_blocks=1 00:36:33.138 --rc geninfo_unexecuted_blocks=1 00:36:33.138 00:36:33.138 ' 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:33.138 20:35:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:33.138 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:33.138 
20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@458 -- # nvmf_veth_init 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:36:33.138 Cannot find device "nvmf_init_br" 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:36:33.138 Cannot find device "nvmf_init_br2" 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:36:33.138 Cannot find device "nvmf_tgt_br" 00:36:33.138 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:36:33.139 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:36:33.139 Cannot find device "nvmf_tgt_br2" 00:36:33.139 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:36:33.139 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:36:33.398 Cannot find device "nvmf_init_br" 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:36:33.398 Cannot find device "nvmf_init_br2" 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:36:33.398 Cannot find device "nvmf_tgt_br" 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:36:33.398 Cannot find device "nvmf_tgt_br2" 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:36:33.398 Cannot find device "nvmf_br" 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:36:33.398 Cannot find device "nvmf_init_if" 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:36:33.398 Cannot find device "nvmf_init_if2" 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:33.398 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:33.398 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:36:33.398 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:36:33.399 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:36:33.399 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:36:33.399 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:33.399 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:33.399 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:33.399 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:36:33.399 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:36:33.399 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:36:33.658 20:35:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:36:33.658 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:33.658 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:36:33.658 00:36:33.658 --- 10.0.0.3 ping statistics --- 00:36:33.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:33.658 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:36:33.658 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:36:33.658 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:36:33.658 00:36:33.658 --- 10.0.0.4 ping statistics --- 00:36:33.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:33.658 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:33.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:33.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:36:33.658 00:36:33.658 --- 10.0.0.1 ping statistics --- 00:36:33.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:33.658 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:36:33.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:33.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:36:33.658 00:36:33.658 --- 10.0.0.2 ping statistics --- 00:36:33.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:33.658 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # return 0 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:36:33.658 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=82455 00:36:33.659 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:36:33.659 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 82455 00:36:33.659 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 82455 ']' 00:36:33.659 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:33.659 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:33.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:33.659 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:33.659 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:33.659 20:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:36:33.659 [2024-10-01 20:35:28.818070] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
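
(The network bring-up above, condensed.) nvmf_veth_init builds four veth pairs: two for the initiator side on the host (10.0.0.1/.2) and two whose far ends move into the nvmf_tgt_ns_spdk namespace for the target (10.0.0.3/.4). It then bridges all the host-side peers and opens iptables for port 4420; the four pings confirm the fabric. A sketch of the topology using the exact names and addresses from this run ('up' steps for the if-side devices elided for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator 10.0.0.1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator 10.0.0.2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target    10.0.0.3
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target    10.0.0.4
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge && ip link set nvmf_br up
for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$peer" up
    ip link set "$peer" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4       # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
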
00:36:33.659 [2024-10-01 20:35:28.818173] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:33.918 [2024-10-01 20:35:28.959802] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:33.918 [2024-10-01 20:35:29.032498] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:33.918 [2024-10-01 20:35:29.032566] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:33.918 [2024-10-01 20:35:29.032581] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:33.918 [2024-10-01 20:35:29.032591] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:33.918 [2024-10-01 20:35:29.032600] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:33.918 [2024-10-01 20:35:29.032633] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:33.918 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:33.918 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:36:33.918 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:33.918 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:33.918 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:36:33.918 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:33.918 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:36:33.918 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:36:34.177 true 00:36:34.435 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:36:34.435 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:36:34.435 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:36:34.435 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:36:34.435 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:36:34.695 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:36:34.695 20:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:36:34.955 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:36:34.955 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:36:34.955 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:36:35.213 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:36:35.213 20:35:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:36:35.470 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:36:35.470 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:36:35.471 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:36:35.471 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:36:35.775 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:36:35.775 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:36:35.775 20:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:36:36.057 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:36:36.057 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:36:36.327 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:36:36.327 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:36:36.327 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:36:36.586 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:36:36.586 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:36:36.846 20:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.9RHKYJu84C 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.0EgO7c1W1P 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.9RHKYJu84C 00:36:36.846 20:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.0EgO7c1W1P 00:36:36.846 20:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:36:37.105 20:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:36:37.675 20:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.9RHKYJu84C 00:36:37.675 20:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.9RHKYJu84C 00:36:37.675 20:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:36:37.675 [2024-10-01 20:35:32.900321] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:37.675 20:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:36:38.243 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:36:38.243 [2024-10-01 20:35:33.448408] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:38.243 [2024-10-01 20:35:33.448677] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:38.243 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:36:38.502 malloc0 00:36:38.502 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:36:38.762 20:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9RHKYJu84C 00:36:39.021 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:36:39.280 20:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.9RHKYJu84C 00:36:51.488 Initializing NVMe Controllers 00:36:51.488 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:36:51.488 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:51.488 Initialization complete. Launching workers. 00:36:51.488 ======================================================== 00:36:51.488 Latency(us) 00:36:51.488 Device Information : IOPS MiB/s Average min max 00:36:51.488 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10990.09 42.93 5824.64 1486.49 9529.45 00:36:51.488 ======================================================== 00:36:51.488 Total : 10990.09 42.93 5824.64 1486.49 9529.45 00:36:51.488 00:36:51.488 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9RHKYJu84C 00:36:51.488 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:36:51.488 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:36:51.488 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:36:51.488 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9RHKYJu84C 00:36:51.488 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:51.488 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82806 00:36:51.488 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:51.488 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:36:51.488 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82806 /var/tmp/bdevperf.sock 00:36:51.488 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 82806 ']' 00:36:51.488 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:51.488 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:51.488 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:51.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:51.488 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:51.488 20:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:36:51.488 [2024-10-01 20:35:44.589652] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
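[annotation] The format_interchange_psk/format_key trace earlier in this run is what produced the two NVMeTLSkey-1 strings used everywhere below. The layout follows the NVMe/TCP PSK interchange convention: a fixed NVMeTLSkey-1 prefix, a two-digit hash indicator (01 and 02 map to SHA-256 and SHA-384, as far as the convention goes), and a base64 payload, colon-delimited. A minimal standalone sketch of what the `python -` heredoc in the trace appears to compute, assuming the payload is the configured key material with a little-endian CRC32 appended:

format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib

key, digest = sys.argv[1].encode(), int(sys.argv[2])
# payload = configured key material + CRC32 (little-endian byte order assumed)
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PY
}

# If those assumptions hold, this reproduces the first key from the trace:
format_interchange_psk 00112233445566778899aabbccddeeff 1
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: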
00:36:51.488 [2024-10-01 20:35:44.589764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82806 ] 00:36:51.488 [2024-10-01 20:35:44.731258] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:51.488 [2024-10-01 20:35:44.800287] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:36:51.488 20:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:51.488 20:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:36:51.488 20:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9RHKYJu84C 00:36:51.488 20:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:36:51.488 [2024-10-01 20:35:46.007724] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:51.488 TLSTESTn1 00:36:51.488 20:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:36:51.488 Running I/O for 10 seconds... 00:37:01.396 4601.00 IOPS, 17.97 MiB/s 4565.00 IOPS, 17.83 MiB/s 4625.67 IOPS, 18.07 MiB/s 4654.75 IOPS, 18.18 MiB/s 4674.60 IOPS, 18.26 MiB/s 4695.17 IOPS, 18.34 MiB/s 4705.43 IOPS, 18.38 MiB/s 4709.12 IOPS, 18.40 MiB/s 4719.89 IOPS, 18.44 MiB/s 4720.60 IOPS, 18.44 MiB/s 00:37:01.396 Latency(us) 00:37:01.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:01.396 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:37:01.396 Verification LBA range: start 0x0 length 0x2000 00:37:01.396 TLSTESTn1 : 10.01 4726.53 18.46 0.00 0.00 27034.37 5242.88 20971.52 00:37:01.396 =================================================================================================================== 00:37:01.396 Total : 4726.53 18.46 0.00 0.00 27034.37 5242.88 20971.52 00:37:01.396 { 00:37:01.396 "results": [ 00:37:01.396 { 00:37:01.396 "job": "TLSTESTn1", 00:37:01.396 "core_mask": "0x4", 00:37:01.396 "workload": "verify", 00:37:01.396 "status": "finished", 00:37:01.396 "verify_range": { 00:37:01.396 "start": 0, 00:37:01.396 "length": 8192 00:37:01.396 }, 00:37:01.396 "queue_depth": 128, 00:37:01.396 "io_size": 4096, 00:37:01.396 "runtime": 10.014528, 00:37:01.396 "iops": 4726.533292432754, 00:37:01.396 "mibps": 18.463020673565445, 00:37:01.396 "io_failed": 0, 00:37:01.396 "io_timeout": 0, 00:37:01.396 "avg_latency_us": 27034.373141889166, 00:37:01.396 "min_latency_us": 5242.88, 00:37:01.396 "max_latency_us": 20971.52 00:37:01.396 } 00:37:01.396 ], 00:37:01.396 "core_count": 1 00:37:01.396 } 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 82806 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 82806 ']' 00:37:01.396 20:35:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 82806 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82806 00:37:01.396 killing process with pid 82806 00:37:01.396 Received shutdown signal, test time was about 10.000000 seconds 00:37:01.396 00:37:01.396 Latency(us) 00:37:01.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:01.396 =================================================================================================================== 00:37:01.396 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82806' 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 82806 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 82806 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0EgO7c1W1P 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0EgO7c1W1P 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0EgO7c1W1P 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0EgO7c1W1P 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82970 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- 
# trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82970 /var/tmp/bdevperf.sock 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 82970 ']' 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:01.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:01.396 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:01.396 [2024-10-01 20:35:56.549684] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:37:01.396 [2024-10-01 20:35:56.549784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82970 ] 00:37:01.682 [2024-10-01 20:35:56.679554] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:01.682 [2024-10-01 20:35:56.730029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:01.682 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:01.682 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:37:01.682 20:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0EgO7c1W1P 00:37:01.957 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:37:02.215 [2024-10-01 20:35:57.340766] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:02.215 [2024-10-01 20:35:57.345808] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:02.215 [2024-10-01 20:35:57.346381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b5c20 (107): Transport endpoint is not connected 00:37:02.215 [2024-10-01 20:35:57.347368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b5c20 (9): Bad file descriptor 00:37:02.215 [2024-10-01 20:35:57.348365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:02.215 [2024-10-01 20:35:57.348388] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:37:02.215 [2024-10-01 20:35:57.348397] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 
subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:37:02.215 [2024-10-01 20:35:57.348407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:02.215 2024/10/01 20:35:57 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:37:02.215 request: 00:37:02.215 { 00:37:02.215 "method": "bdev_nvme_attach_controller", 00:37:02.215 "params": { 00:37:02.215 "name": "TLSTEST", 00:37:02.215 "trtype": "tcp", 00:37:02.216 "traddr": "10.0.0.3", 00:37:02.216 "adrfam": "ipv4", 00:37:02.216 "trsvcid": "4420", 00:37:02.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:02.216 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:02.216 "prchk_reftag": false, 00:37:02.216 "prchk_guard": false, 00:37:02.216 "hdgst": false, 00:37:02.216 "ddgst": false, 00:37:02.216 "psk": "key0", 00:37:02.216 "allow_unrecognized_csi": false 00:37:02.216 } 00:37:02.216 } 00:37:02.216 Got JSON-RPC error response 00:37:02.216 GoRPCClient: error on JSON-RPC call 00:37:02.216 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 82970 00:37:02.216 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 82970 ']' 00:37:02.216 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 82970 00:37:02.216 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:37:02.216 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:02.216 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82970 00:37:02.216 killing process with pid 82970 00:37:02.216 Received shutdown signal, test time was about 10.000000 seconds 00:37:02.216 00:37:02.216 Latency(us) 00:37:02.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:02.216 =================================================================================================================== 00:37:02.216 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:02.216 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:37:02.216 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:37:02.216 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82970' 00:37:02.216 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 82970 00:37:02.216 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 82970 00:37:02.474 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:37:02.474 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:37:02.474 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:02.474 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:02.474 20:35:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:02.474 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9RHKYJu84C 00:37:02.474 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:37:02.474 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9RHKYJu84C 00:37:02.474 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:37:02.474 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:02.474 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:37:02.474 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:02.474 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9RHKYJu84C 00:37:02.474 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:37:02.474 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:37:02.474 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:37:02.474 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9RHKYJu84C 00:37:02.474 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:02.474 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83006 00:37:02.474 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:02.474 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:37:02.474 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83006 /var/tmp/bdevperf.sock 00:37:02.474 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83006 ']' 00:37:02.475 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:02.475 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:02.475 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:02.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:02.475 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:02.475 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:02.475 [2024-10-01 20:35:57.648991] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
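[annotation] The wrong-key rejection above, and every case that follows, drive the same client-side harness: bdevperf is started idle with -z against its own RPC socket, the PSK is registered in the keyring, the TLS controller is attached, and bdevperf.py kicks off the actual I/O. Condensed from the invocations in this log (the real script also waits for the RPC socket to appear before issuing the rpc.py calls):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9RHKYJu84C
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests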
00:37:02.475 [2024-10-01 20:35:57.649621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83006 ] 00:37:02.733 [2024-10-01 20:35:57.791619] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:02.733 [2024-10-01 20:35:57.841664] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:02.733 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:02.733 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:37:02.733 20:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9RHKYJu84C 00:37:02.991 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:37:03.249 [2024-10-01 20:35:58.392553] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:03.249 [2024-10-01 20:35:58.403477] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:37:03.249 [2024-10-01 20:35:58.403531] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:37:03.249 [2024-10-01 20:35:58.403594] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:03.249 [2024-10-01 20:35:58.404183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1644c20 (107): Transport endpoint is not connected 00:37:03.249 [2024-10-01 20:35:58.405157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1644c20 (9): Bad file descriptor 00:37:03.249 [2024-10-01 20:35:58.406154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:03.249 [2024-10-01 20:35:58.406194] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:37:03.249 [2024-10-01 20:35:58.406204] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:37:03.249 [2024-10-01 20:35:58.406214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:37:03.249 2024/10/01 20:35:58 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:37:03.249 request: 00:37:03.249 { 00:37:03.249 "method": "bdev_nvme_attach_controller", 00:37:03.249 "params": { 00:37:03.249 "name": "TLSTEST", 00:37:03.249 "trtype": "tcp", 00:37:03.249 "traddr": "10.0.0.3", 00:37:03.249 "adrfam": "ipv4", 00:37:03.249 "trsvcid": "4420", 00:37:03.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:03.249 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:03.249 "prchk_reftag": false, 00:37:03.249 "prchk_guard": false, 00:37:03.249 "hdgst": false, 00:37:03.249 "ddgst": false, 00:37:03.249 "psk": "key0", 00:37:03.249 "allow_unrecognized_csi": false 00:37:03.249 } 00:37:03.249 } 00:37:03.249 Got JSON-RPC error response 00:37:03.249 GoRPCClient: error on JSON-RPC call 00:37:03.249 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83006 00:37:03.249 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83006 ']' 00:37:03.249 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83006 00:37:03.249 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:37:03.249 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:03.249 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83006 00:37:03.249 killing process with pid 83006 00:37:03.249 Received shutdown signal, test time was about 10.000000 seconds 00:37:03.249 00:37:03.249 Latency(us) 00:37:03.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:03.249 =================================================================================================================== 00:37:03.249 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:03.249 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:37:03.249 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:37:03.249 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83006' 00:37:03.249 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83006 00:37:03.249 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83006 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /tmp/tmp.9RHKYJu84C 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9RHKYJu84C 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9RHKYJu84C 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9RHKYJu84C 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83044 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83044 /var/tmp/bdevperf.sock 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83044 ']' 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:03.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:03.508 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:03.508 [2024-10-01 20:35:58.688218] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
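[annotation] This case, like the two before it, runs under autotest's NOT wrapper — that is the local es=0 / (( !es == 0 )) bookkeeping visible in the trace. The pattern inverts the wrapped command's exit status; a minimal sketch, leaving out the signal handling behind the real helper's (( es > 128 )) check:

NOT() {
    local es=0
    "$@" || es=$?
    # the test step passes only when the wrapped command failed
    (( es != 0 ))
}

# e.g. connecting to the wrong subsystem NQN must be rejected:
NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9RHKYJu84C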
00:37:03.508 [2024-10-01 20:35:58.688320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83044 ] 00:37:03.767 [2024-10-01 20:35:58.819866] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:03.767 [2024-10-01 20:35:58.874955] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:03.767 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:03.767 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:37:03.767 20:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9RHKYJu84C 00:37:04.024 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:37:04.283 [2024-10-01 20:35:59.534982] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:04.542 [2024-10-01 20:35:59.543757] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:37:04.542 [2024-10-01 20:35:59.543798] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:37:04.542 [2024-10-01 20:35:59.543862] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:04.542 [2024-10-01 20:35:59.544725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b2c20 (107): Transport endpoint is not connected 00:37:04.542 [2024-10-01 20:35:59.545716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b2c20 (9): Bad file descriptor 00:37:04.542 [2024-10-01 20:35:59.546713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:37:04.542 [2024-10-01 20:35:59.546748] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:37:04.542 [2024-10-01 20:35:59.546773] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:37:04.542 [2024-10-01 20:35:59.546782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
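[annotation] Both NQN-mismatch cases fail the same way: the TLS handshake reaches the target, but the PSK lookup misses. Going by the two "Could not find PSK for identity" errors above, the lookup identity appears to be assembled from a fixed NVMe0R01 tag plus the host and subsystem NQNs:

# PSK identity format as inferred from the lookup errors in this log
psk_identity() {
    local hostnqn=$1 subnqn=$2
    printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
}

psk_identity nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
# NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2

Only host1 against cnode1 was registered via nvmf_subsystem_add_host --psk key0, so any other pairing has no keyring entry and the connection is torn down during the handshake — hence the "Transport endpoint is not connected" noise on the initiator side.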
00:37:04.542 2024/10/01 20:35:59 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:37:04.542 request: 00:37:04.542 { 00:37:04.542 "method": "bdev_nvme_attach_controller", 00:37:04.542 "params": { 00:37:04.542 "name": "TLSTEST", 00:37:04.542 "trtype": "tcp", 00:37:04.542 "traddr": "10.0.0.3", 00:37:04.542 "adrfam": "ipv4", 00:37:04.542 "trsvcid": "4420", 00:37:04.542 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:04.542 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:04.542 "prchk_reftag": false, 00:37:04.542 "prchk_guard": false, 00:37:04.542 "hdgst": false, 00:37:04.542 "ddgst": false, 00:37:04.543 "psk": "key0", 00:37:04.543 "allow_unrecognized_csi": false 00:37:04.543 } 00:37:04.543 } 00:37:04.543 Got JSON-RPC error response 00:37:04.543 GoRPCClient: error on JSON-RPC call 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83044 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83044 ']' 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83044 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83044 00:37:04.543 killing process with pid 83044 00:37:04.543 Received shutdown signal, test time was about 10.000000 seconds 00:37:04.543 00:37:04.543 Latency(us) 00:37:04.543 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:04.543 =================================================================================================================== 00:37:04.543 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83044' 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83044 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83044 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 '' 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83083 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83083 /var/tmp/bdevperf.sock 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83083 ']' 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:04.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:04.543 20:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:04.802 [2024-10-01 20:35:59.844556] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
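[annotation] The case starting here hands bdevperf an empty string instead of a key path. As the keyring errors just below show, registration is refused before any connection is attempted, because key files must be named by absolute path:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# rejected up front: "" is not an absolute path
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ""
# accepted form, as used in the earlier cases
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9RHKYJu84C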
00:37:04.802 [2024-10-01 20:35:59.844673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83083 ] 00:37:04.802 [2024-10-01 20:35:59.975915] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:04.802 [2024-10-01 20:36:00.031898] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:05.739 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:05.739 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:37:05.739 20:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:37:05.997 [2024-10-01 20:36:01.077397] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:37:05.997 [2024-10-01 20:36:01.077441] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:05.997 2024/10/01 20:36:01 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:37:05.997 request: 00:37:05.997 { 00:37:05.997 "method": "keyring_file_add_key", 00:37:05.997 "params": { 00:37:05.997 "name": "key0", 00:37:05.997 "path": "" 00:37:05.997 } 00:37:05.997 } 00:37:05.997 Got JSON-RPC error response 00:37:05.997 GoRPCClient: error on JSON-RPC call 00:37:05.997 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:37:06.256 [2024-10-01 20:36:01.309617] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:06.256 [2024-10-01 20:36:01.309691] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:37:06.256 2024/10/01 20:36:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:37:06.256 request: 00:37:06.256 { 00:37:06.256 "method": "bdev_nvme_attach_controller", 00:37:06.256 "params": { 00:37:06.256 "name": "TLSTEST", 00:37:06.256 "trtype": "tcp", 00:37:06.256 "traddr": "10.0.0.3", 00:37:06.256 "adrfam": "ipv4", 00:37:06.256 "trsvcid": "4420", 00:37:06.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:06.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:06.256 "prchk_reftag": false, 00:37:06.256 "prchk_guard": false, 00:37:06.256 "hdgst": false, 00:37:06.256 "ddgst": false, 00:37:06.256 "psk": "key0", 00:37:06.256 "allow_unrecognized_csi": false 00:37:06.256 } 00:37:06.256 } 00:37:06.256 Got JSON-RPC error response 00:37:06.256 GoRPCClient: error on JSON-RPC call 00:37:06.256 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83083 00:37:06.256 20:36:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83083 ']' 00:37:06.256 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83083 00:37:06.256 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:37:06.256 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:06.256 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83083 00:37:06.256 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:37:06.256 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:37:06.256 killing process with pid 83083 00:37:06.257 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83083' 00:37:06.257 Received shutdown signal, test time was about 10.000000 seconds 00:37:06.257 00:37:06.257 Latency(us) 00:37:06.257 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:06.257 =================================================================================================================== 00:37:06.257 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:06.257 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83083 00:37:06.257 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83083 00:37:06.515 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:37:06.516 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:37:06.516 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:06.516 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:06.516 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:06.516 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 82455 00:37:06.516 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 82455 ']' 00:37:06.516 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 82455 00:37:06.516 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:37:06.516 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:06.516 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82455 00:37:06.516 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:06.516 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:06.516 killing process with pid 82455 00:37:06.516 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82455' 00:37:06.516 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 82455 00:37:06.516 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 82455 00:37:06.775 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 
00112233445566778899aabbccddeeff0011223344556677 2 00:37:06.775 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:37:06.775 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:37:06.775 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:06.775 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:37:06.775 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:37:06.775 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:37:06.775 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:37:06.775 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:37:06.775 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.gTlJJjye3R 00:37:06.775 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:37:06.775 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.gTlJJjye3R 00:37:06.775 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:37:06.775 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:06.775 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:06.775 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:06.775 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=83151 00:37:06.775 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:37:06.775 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 83151 00:37:06.775 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83151 ']' 00:37:06.775 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:06.775 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:06.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:06.776 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:06.776 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:06.776 20:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:06.776 [2024-10-01 20:36:01.891172] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
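[annotation] The fresh target about to start uses a longer secret run through the same helper with digest indicator 2, i.e. the SHA-384 flavor of the interchange format. With the sketch from the earlier annotation (same little-endian CRC32 assumption carried over), the derivation looks like:

format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

The result lands in /tmp/tmp.gTlJJjye3R with mode 0600 — the permission the final case in this excerpt deliberately breaks.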
00:37:06.776 [2024-10-01 20:36:01.891285] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:07.035 [2024-10-01 20:36:02.032575] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:07.035 [2024-10-01 20:36:02.090573] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:07.035 [2024-10-01 20:36:02.090635] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:07.035 [2024-10-01 20:36:02.090645] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:07.035 [2024-10-01 20:36:02.090658] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:07.035 [2024-10-01 20:36:02.090664] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:07.035 [2024-10-01 20:36:02.090693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:07.602 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:07.602 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:37:07.602 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:07.602 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:07.602 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:07.602 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:07.602 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.gTlJJjye3R 00:37:07.602 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gTlJJjye3R 00:37:07.602 20:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:37:07.861 [2024-10-01 20:36:03.112879] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:08.120 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:37:08.120 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:37:08.379 [2024-10-01 20:36:03.568950] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:08.379 [2024-10-01 20:36:03.569238] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:08.379 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:37:08.639 malloc0 00:37:08.639 20:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:37:08.899 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.gTlJJjye3R 00:37:09.158 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:37:09.417 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gTlJJjye3R 00:37:09.417 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:37:09.417 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:37:09.417 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:37:09.417 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gTlJJjye3R 00:37:09.417 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:09.417 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83261 00:37:09.417 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:09.417 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:37:09.417 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83261 /var/tmp/bdevperf.sock 00:37:09.417 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83261 ']' 00:37:09.417 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:09.417 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:09.417 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:09.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:09.417 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:09.417 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:09.417 [2024-10-01 20:36:04.525514] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
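[annotation] setup_nvmf_tgt has now been replayed for the new key. Stripped of trace noise, the target-side sequence is the one below; the -k on the listener is what puts the port into TLS mode, and --psk key0 is what ties the keyring entry to that host:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.gTlJJjye3R
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0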
00:37:09.417 [2024-10-01 20:36:04.525615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83261 ] 00:37:09.417 [2024-10-01 20:36:04.663860] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:09.677 [2024-10-01 20:36:04.714444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:09.677 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:09.677 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:37:09.677 20:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gTlJJjye3R 00:37:09.936 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:37:10.195 [2024-10-01 20:36:05.270576] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:10.195 TLSTESTn1 00:37:10.195 20:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:37:10.453 Running I/O for 10 seconds... 00:37:20.289 4489.00 IOPS, 17.54 MiB/s 4546.00 IOPS, 17.76 MiB/s 4568.00 IOPS, 17.84 MiB/s 4630.75 IOPS, 18.09 MiB/s 4655.00 IOPS, 18.18 MiB/s 4685.67 IOPS, 18.30 MiB/s 4701.86 IOPS, 18.37 MiB/s 4712.75 IOPS, 18.41 MiB/s 4718.22 IOPS, 18.43 MiB/s 4721.50 IOPS, 18.44 MiB/s 00:37:20.289 Latency(us) 00:37:20.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:20.289 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:37:20.289 Verification LBA range: start 0x0 length 0x2000 00:37:20.289 TLSTESTn1 : 10.02 4727.00 18.46 0.00 0.00 27031.60 5272.67 20614.05 00:37:20.289 =================================================================================================================== 00:37:20.289 Total : 4727.00 18.46 0.00 0.00 27031.60 5272.67 20614.05 00:37:20.289 { 00:37:20.289 "results": [ 00:37:20.289 { 00:37:20.289 "job": "TLSTESTn1", 00:37:20.289 "core_mask": "0x4", 00:37:20.289 "workload": "verify", 00:37:20.289 "status": "finished", 00:37:20.289 "verify_range": { 00:37:20.289 "start": 0, 00:37:20.289 "length": 8192 00:37:20.289 }, 00:37:20.289 "queue_depth": 128, 00:37:20.289 "io_size": 4096, 00:37:20.289 "runtime": 10.015021, 00:37:20.289 "iops": 4726.999573939985, 00:37:20.289 "mibps": 18.464842085703065, 00:37:20.289 "io_failed": 0, 00:37:20.289 "io_timeout": 0, 00:37:20.289 "avg_latency_us": 27031.602776144453, 00:37:20.289 "min_latency_us": 5272.669090909091, 00:37:20.289 "max_latency_us": 20614.05090909091 00:37:20.289 } 00:37:20.289 ], 00:37:20.289 "core_count": 1 00:37:20.289 } 00:37:20.289 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:20.289 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83261 00:37:20.289 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83261 ']' 
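[annotation] The full run against the new key passes, and bdevperf again emits its summary both as a table and as a JSON blob. Purely illustrative — the JSON is not written to a file in this run — but had it been captured as a hypothetical results.json, jq (already used earlier for the ssl sock options) would pull out the headline numbers:

jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, \(.avg_latency_us) us avg"' results.json
# TLSTESTn1: 4726.999573939985 IOPS, 18.464842085703065 MiB/s, 27031.602776144453 us avg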
00:37:20.289 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83261 00:37:20.289 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:37:20.289 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:20.289 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83261 00:37:20.289 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:37:20.289 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:37:20.289 killing process with pid 83261 00:37:20.289 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83261' 00:37:20.289 Received shutdown signal, test time was about 10.000000 seconds 00:37:20.289 00:37:20.289 Latency(us) 00:37:20.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:20.289 =================================================================================================================== 00:37:20.289 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:20.289 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83261 00:37:20.289 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83261 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.gTlJJjye3R 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gTlJJjye3R 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gTlJJjye3R 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gTlJJjye3R 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gTlJJjye3R 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83408 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:20.561 20:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83408 /var/tmp/bdevperf.sock 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83408 ']' 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:20.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:20.561 20:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:20.561 [2024-10-01 20:36:15.779405] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:37:20.561 [2024-10-01 20:36:15.779512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83408 ] 00:37:20.820 [2024-10-01 20:36:15.915786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:20.820 [2024-10-01 20:36:15.966130] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:21.079 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:21.079 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:37:21.079 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gTlJJjye3R 00:37:21.079 [2024-10-01 20:36:16.289569] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gTlJJjye3R': 0100666 00:37:21.079 [2024-10-01 20:36:16.289616] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:21.079 2024/10/01 20:36:16 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.gTlJJjye3R], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:37:21.079 request: 00:37:21.079 { 00:37:21.079 "method": "keyring_file_add_key", 00:37:21.079 "params": { 00:37:21.079 "name": "key0", 00:37:21.079 "path": "/tmp/tmp.gTlJJjye3R" 00:37:21.079 } 00:37:21.079 } 00:37:21.079 Got JSON-RPC error response 00:37:21.079 GoRPCClient: error on JSON-RPC call 00:37:21.079 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:37:21.337 [2024-10-01 20:36:16.577715] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:21.337 [2024-10-01 20:36:16.577773] 
bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:37:21.337 2024/10/01 20:36:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:37:21.337 request: 00:37:21.337 { 00:37:21.337 "method": "bdev_nvme_attach_controller", 00:37:21.337 "params": { 00:37:21.337 "name": "TLSTEST", 00:37:21.337 "trtype": "tcp", 00:37:21.337 "traddr": "10.0.0.3", 00:37:21.337 "adrfam": "ipv4", 00:37:21.337 "trsvcid": "4420", 00:37:21.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:21.337 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:21.337 "prchk_reftag": false, 00:37:21.337 "prchk_guard": false, 00:37:21.337 "hdgst": false, 00:37:21.337 "ddgst": false, 00:37:21.337 "psk": "key0", 00:37:21.337 "allow_unrecognized_csi": false 00:37:21.337 } 00:37:21.337 } 00:37:21.337 Got JSON-RPC error response 00:37:21.337 GoRPCClient: error on JSON-RPC call 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83408 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83408 ']' 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83408 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83408 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:37:21.597 killing process with pid 83408 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83408' 00:37:21.597 Received shutdown signal, test time was about 10.000000 seconds 00:37:21.597 00:37:21.597 Latency(us) 00:37:21.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:21.597 =================================================================================================================== 00:37:21.597 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83408 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83408 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:21.597 20:36:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 83151 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83151 ']' 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83151 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83151 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:21.597 killing process with pid 83151 00:37:21.597 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83151' 00:37:21.856 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83151 00:37:21.856 20:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83151 00:37:21.856 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:37:21.856 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:21.856 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:21.856 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:21.856 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=83452 00:37:21.856 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:37:21.856 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 83452 00:37:21.856 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83452 ']' 00:37:21.856 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:21.856 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:21.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:21.856 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:21.856 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:21.856 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:21.856 [2024-10-01 20:36:17.102326] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
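The NOT block above is the point of stage 172: after the chmod 0666 at tls.sh@171 the keyring rejects the file (keyring_file_check_path logs "Invalid permissions for key file ... 0100666" and keyring_file_add_key returns Operation not permitted), so bdev_nvme_attach_controller fails with Code=-126 "Required key not available" and run_bdevperf returns 1 — exactly the expected outcome. The rule being checked, sketched with this run's key path (0600 is what tls.sh@182 restores later):

  chmod 0666 /tmp/tmp.gTlJJjye3R
  rpc.py keyring_file_add_key key0 /tmp/tmp.gTlJJjye3R   # rejected: group/other access
  chmod 0600 /tmp/tmp.gTlJJjye3R
  rpc.py keyring_file_add_key key0 /tmp/tmp.gTlJJjye3R   # accepted: owner-only

The same check is now exercised server-side: a fresh nvmf target (pid 83452) comes up and setup_nvmf_tgt runs against the still world-accessible key, which must fail at the keyring_file_add_key and nvmf_subsystem_add_host steps.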
00:37:21.857 [2024-10-01 20:36:17.102404] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:22.116 [2024-10-01 20:36:17.236779] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:22.116 [2024-10-01 20:36:17.290445] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:22.116 [2024-10-01 20:36:17.290491] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:22.116 [2024-10-01 20:36:17.290518] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:22.116 [2024-10-01 20:36:17.290540] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:22.116 [2024-10-01 20:36:17.290546] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:22.116 [2024-10-01 20:36:17.290570] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:22.375 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:22.375 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:37:22.375 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:22.375 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:22.376 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:22.376 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:22.376 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.gTlJJjye3R 00:37:22.376 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:37:22.376 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.gTlJJjye3R 00:37:22.376 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:37:22.376 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:22.376 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:37:22.376 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:22.376 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.gTlJJjye3R 00:37:22.376 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gTlJJjye3R 00:37:22.376 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:37:22.634 [2024-10-01 20:36:17.730072] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:22.634 20:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:37:22.893 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:37:23.152 [2024-10-01 20:36:18.278196] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:23.152 [2024-10-01 20:36:18.278424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:23.152 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:37:23.411 malloc0 00:37:23.411 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:37:23.669 20:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gTlJJjye3R 00:37:23.929 [2024-10-01 20:36:19.092537] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gTlJJjye3R': 0100666 00:37:23.929 [2024-10-01 20:36:19.092587] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:23.929 2024/10/01 20:36:19 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.gTlJJjye3R], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:37:23.929 request: 00:37:23.929 { 00:37:23.929 "method": "keyring_file_add_key", 00:37:23.929 "params": { 00:37:23.929 "name": "key0", 00:37:23.929 "path": "/tmp/tmp.gTlJJjye3R" 00:37:23.929 } 00:37:23.929 } 00:37:23.929 Got JSON-RPC error response 00:37:23.929 GoRPCClient: error on JSON-RPC call 00:37:23.929 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:37:24.188 [2024-10-01 20:36:19.368654] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:37:24.188 [2024-10-01 20:36:19.368724] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:37:24.188 2024/10/01 20:36:19 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:37:24.188 request: 00:37:24.188 { 00:37:24.188 "method": "nvmf_subsystem_add_host", 00:37:24.188 "params": { 00:37:24.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:24.188 "host": "nqn.2016-06.io.spdk:host1", 00:37:24.188 "psk": "key0" 00:37:24.188 } 00:37:24.188 } 00:37:24.188 Got JSON-RPC error response 00:37:24.188 GoRPCClient: error on JSON-RPC call 00:37:24.188 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:37:24.188 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:24.188 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:24.188 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:24.188 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 83452 00:37:24.188 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83452 ']' 00:37:24.188 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 83452 00:37:24.188 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:37:24.188 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:24.188 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83452 00:37:24.188 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:24.188 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:24.188 killing process with pid 83452 00:37:24.188 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83452' 00:37:24.188 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83452 00:37:24.188 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83452 00:37:24.447 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.gTlJJjye3R 00:37:24.447 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:37:24.447 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:24.447 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:24.447 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:24.447 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=83566 00:37:24.447 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:37:24.447 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 83566 00:37:24.447 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83566 ']' 00:37:24.447 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:24.447 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:24.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:24.447 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:24.447 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:24.447 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:24.447 [2024-10-01 20:36:19.688626] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:37:24.447 [2024-10-01 20:36:19.688727] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:24.706 [2024-10-01 20:36:19.819453] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:24.706 [2024-10-01 20:36:19.869324] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
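With the key back at 0600 (tls.sh@182), target 83566 comes up and the whole chain loads cleanly. setup_nvmf_tgt as the trace below replays it, commands copied from tls.sh@52-59 with this run's names; -k on the listener is what requests the secure (TLS) channel, which is why secure_channel:true appears in the saved config later on:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.3 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.gTlJJjye3R
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
         nqn.2016-06.io.spdk:host1 --psk key0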
00:37:24.706 [2024-10-01 20:36:19.869392] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:24.706 [2024-10-01 20:36:19.869417] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:24.706 [2024-10-01 20:36:19.869424] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:24.706 [2024-10-01 20:36:19.869430] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:24.706 [2024-10-01 20:36:19.869460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:24.964 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:24.964 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:37:24.964 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:24.964 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:24.964 20:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:24.965 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:24.965 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.gTlJJjye3R 00:37:24.965 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gTlJJjye3R 00:37:24.965 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:37:25.223 [2024-10-01 20:36:20.235847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:25.223 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:37:25.482 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:37:25.741 [2024-10-01 20:36:20.791990] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:25.741 [2024-10-01 20:36:20.792300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:25.741 20:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:37:26.000 malloc0 00:37:26.000 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:37:26.259 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gTlJJjye3R 00:37:26.517 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:37:26.775 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=83662 00:37:26.775 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:37:26.775 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:26.775 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 83662 /var/tmp/bdevperf.sock 00:37:26.775 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83662 ']' 00:37:26.775 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:26.775 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:26.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:26.776 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:26.776 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:26.776 20:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:26.776 [2024-10-01 20:36:21.906648] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:37:26.776 [2024-10-01 20:36:21.906755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83662 ] 00:37:27.033 [2024-10-01 20:36:22.042351] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:27.033 [2024-10-01 20:36:22.111956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:27.968 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:27.968 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:37:27.968 20:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gTlJJjye3R 00:37:27.968 20:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:37:28.225 [2024-10-01 20:36:23.300628] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:28.225 TLSTESTn1 00:37:28.225 20:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:37:28.791 20:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:37:28.791 "subsystems": [ 00:37:28.791 { 00:37:28.791 "subsystem": "keyring", 00:37:28.791 "config": [ 00:37:28.791 { 00:37:28.791 "method": "keyring_file_add_key", 00:37:28.791 "params": { 00:37:28.791 "name": "key0", 00:37:28.791 "path": "/tmp/tmp.gTlJJjye3R" 00:37:28.791 } 00:37:28.791 } 00:37:28.791 ] 00:37:28.791 }, 00:37:28.791 { 00:37:28.791 "subsystem": "iobuf", 00:37:28.791 "config": [ 00:37:28.791 { 00:37:28.791 "method": "iobuf_set_options", 00:37:28.791 "params": { 00:37:28.791 "large_bufsize": 135168, 00:37:28.791 "large_pool_count": 1024, 00:37:28.791 "small_bufsize": 8192, 00:37:28.791 
"small_pool_count": 8192 00:37:28.791 } 00:37:28.791 } 00:37:28.791 ] 00:37:28.791 }, 00:37:28.791 { 00:37:28.791 "subsystem": "sock", 00:37:28.791 "config": [ 00:37:28.791 { 00:37:28.791 "method": "sock_set_default_impl", 00:37:28.791 "params": { 00:37:28.791 "impl_name": "posix" 00:37:28.791 } 00:37:28.791 }, 00:37:28.791 { 00:37:28.791 "method": "sock_impl_set_options", 00:37:28.791 "params": { 00:37:28.791 "enable_ktls": false, 00:37:28.791 "enable_placement_id": 0, 00:37:28.791 "enable_quickack": false, 00:37:28.791 "enable_recv_pipe": true, 00:37:28.791 "enable_zerocopy_send_client": false, 00:37:28.791 "enable_zerocopy_send_server": true, 00:37:28.791 "impl_name": "ssl", 00:37:28.791 "recv_buf_size": 4096, 00:37:28.791 "send_buf_size": 4096, 00:37:28.791 "tls_version": 0, 00:37:28.791 "zerocopy_threshold": 0 00:37:28.791 } 00:37:28.791 }, 00:37:28.791 { 00:37:28.791 "method": "sock_impl_set_options", 00:37:28.791 "params": { 00:37:28.791 "enable_ktls": false, 00:37:28.791 "enable_placement_id": 0, 00:37:28.791 "enable_quickack": false, 00:37:28.791 "enable_recv_pipe": true, 00:37:28.791 "enable_zerocopy_send_client": false, 00:37:28.791 "enable_zerocopy_send_server": true, 00:37:28.791 "impl_name": "posix", 00:37:28.791 "recv_buf_size": 2097152, 00:37:28.791 "send_buf_size": 2097152, 00:37:28.791 "tls_version": 0, 00:37:28.791 "zerocopy_threshold": 0 00:37:28.791 } 00:37:28.791 } 00:37:28.791 ] 00:37:28.792 }, 00:37:28.792 { 00:37:28.792 "subsystem": "vmd", 00:37:28.792 "config": [] 00:37:28.792 }, 00:37:28.792 { 00:37:28.792 "subsystem": "accel", 00:37:28.792 "config": [ 00:37:28.792 { 00:37:28.792 "method": "accel_set_options", 00:37:28.792 "params": { 00:37:28.792 "buf_count": 2048, 00:37:28.792 "large_cache_size": 16, 00:37:28.792 "sequence_count": 2048, 00:37:28.792 "small_cache_size": 128, 00:37:28.792 "task_count": 2048 00:37:28.792 } 00:37:28.792 } 00:37:28.792 ] 00:37:28.792 }, 00:37:28.792 { 00:37:28.792 "subsystem": "bdev", 00:37:28.792 "config": [ 00:37:28.792 { 00:37:28.792 "method": "bdev_set_options", 00:37:28.792 "params": { 00:37:28.792 "bdev_auto_examine": true, 00:37:28.792 "bdev_io_cache_size": 256, 00:37:28.792 "bdev_io_pool_size": 65535, 00:37:28.792 "bdev_io_stack_size": 4096, 00:37:28.792 "iobuf_large_cache_size": 16, 00:37:28.792 "iobuf_small_cache_size": 128 00:37:28.792 } 00:37:28.792 }, 00:37:28.792 { 00:37:28.792 "method": "bdev_raid_set_options", 00:37:28.792 "params": { 00:37:28.792 "process_max_bandwidth_mb_sec": 0, 00:37:28.792 "process_window_size_kb": 1024 00:37:28.792 } 00:37:28.792 }, 00:37:28.792 { 00:37:28.792 "method": "bdev_iscsi_set_options", 00:37:28.792 "params": { 00:37:28.792 "timeout_sec": 30 00:37:28.792 } 00:37:28.792 }, 00:37:28.792 { 00:37:28.792 "method": "bdev_nvme_set_options", 00:37:28.792 "params": { 00:37:28.792 "action_on_timeout": "none", 00:37:28.792 "allow_accel_sequence": false, 00:37:28.792 "arbitration_burst": 0, 00:37:28.792 "bdev_retry_count": 3, 00:37:28.792 "ctrlr_loss_timeout_sec": 0, 00:37:28.792 "delay_cmd_submit": true, 00:37:28.792 "dhchap_dhgroups": [ 00:37:28.792 "null", 00:37:28.792 "ffdhe2048", 00:37:28.792 "ffdhe3072", 00:37:28.792 "ffdhe4096", 00:37:28.792 "ffdhe6144", 00:37:28.792 "ffdhe8192" 00:37:28.792 ], 00:37:28.792 "dhchap_digests": [ 00:37:28.792 "sha256", 00:37:28.792 "sha384", 00:37:28.792 "sha512" 00:37:28.792 ], 00:37:28.792 "disable_auto_failback": false, 00:37:28.792 "fast_io_fail_timeout_sec": 0, 00:37:28.792 "generate_uuids": false, 00:37:28.792 "high_priority_weight": 0, 00:37:28.792 
"io_path_stat": false, 00:37:28.792 "io_queue_requests": 0, 00:37:28.792 "keep_alive_timeout_ms": 10000, 00:37:28.792 "low_priority_weight": 0, 00:37:28.792 "medium_priority_weight": 0, 00:37:28.792 "nvme_adminq_poll_period_us": 10000, 00:37:28.792 "nvme_error_stat": false, 00:37:28.792 "nvme_ioq_poll_period_us": 0, 00:37:28.792 "rdma_cm_event_timeout_ms": 0, 00:37:28.792 "rdma_max_cq_size": 0, 00:37:28.792 "rdma_srq_size": 0, 00:37:28.792 "reconnect_delay_sec": 0, 00:37:28.792 "timeout_admin_us": 0, 00:37:28.792 "timeout_us": 0, 00:37:28.792 "transport_ack_timeout": 0, 00:37:28.792 "transport_retry_count": 4, 00:37:28.792 "transport_tos": 0 00:37:28.792 } 00:37:28.792 }, 00:37:28.792 { 00:37:28.792 "method": "bdev_nvme_set_hotplug", 00:37:28.792 "params": { 00:37:28.792 "enable": false, 00:37:28.792 "period_us": 100000 00:37:28.792 } 00:37:28.792 }, 00:37:28.792 { 00:37:28.792 "method": "bdev_malloc_create", 00:37:28.792 "params": { 00:37:28.792 "block_size": 4096, 00:37:28.792 "dif_is_head_of_md": false, 00:37:28.792 "dif_pi_format": 0, 00:37:28.792 "dif_type": 0, 00:37:28.792 "md_size": 0, 00:37:28.792 "name": "malloc0", 00:37:28.792 "num_blocks": 8192, 00:37:28.792 "optimal_io_boundary": 0, 00:37:28.792 "physical_block_size": 4096, 00:37:28.792 "uuid": "957204bd-9afa-4aa5-b509-36b0fc1f49b3" 00:37:28.792 } 00:37:28.792 }, 00:37:28.792 { 00:37:28.792 "method": "bdev_wait_for_examine" 00:37:28.792 } 00:37:28.792 ] 00:37:28.792 }, 00:37:28.792 { 00:37:28.792 "subsystem": "nbd", 00:37:28.792 "config": [] 00:37:28.792 }, 00:37:28.792 { 00:37:28.792 "subsystem": "scheduler", 00:37:28.792 "config": [ 00:37:28.792 { 00:37:28.792 "method": "framework_set_scheduler", 00:37:28.792 "params": { 00:37:28.792 "name": "static" 00:37:28.792 } 00:37:28.792 } 00:37:28.792 ] 00:37:28.792 }, 00:37:28.792 { 00:37:28.792 "subsystem": "nvmf", 00:37:28.792 "config": [ 00:37:28.792 { 00:37:28.792 "method": "nvmf_set_config", 00:37:28.792 "params": { 00:37:28.792 "admin_cmd_passthru": { 00:37:28.792 "identify_ctrlr": false 00:37:28.792 }, 00:37:28.792 "dhchap_dhgroups": [ 00:37:28.792 "null", 00:37:28.792 "ffdhe2048", 00:37:28.792 "ffdhe3072", 00:37:28.792 "ffdhe4096", 00:37:28.792 "ffdhe6144", 00:37:28.792 "ffdhe8192" 00:37:28.792 ], 00:37:28.792 "dhchap_digests": [ 00:37:28.792 "sha256", 00:37:28.792 "sha384", 00:37:28.792 "sha512" 00:37:28.792 ], 00:37:28.792 "discovery_filter": "match_any" 00:37:28.792 } 00:37:28.792 }, 00:37:28.792 { 00:37:28.792 "method": "nvmf_set_max_subsystems", 00:37:28.792 "params": { 00:37:28.792 "max_subsystems": 1024 00:37:28.792 } 00:37:28.792 }, 00:37:28.792 { 00:37:28.792 "method": "nvmf_set_crdt", 00:37:28.792 "params": { 00:37:28.792 "crdt1": 0, 00:37:28.792 "crdt2": 0, 00:37:28.792 "crdt3": 0 00:37:28.792 } 00:37:28.792 }, 00:37:28.792 { 00:37:28.792 "method": "nvmf_create_transport", 00:37:28.792 "params": { 00:37:28.792 "abort_timeout_sec": 1, 00:37:28.792 "ack_timeout": 0, 00:37:28.792 "buf_cache_size": 4294967295, 00:37:28.792 "c2h_success": false, 00:37:28.792 "data_wr_pool_size": 0, 00:37:28.792 "dif_insert_or_strip": false, 00:37:28.792 "in_capsule_data_size": 4096, 00:37:28.792 "io_unit_size": 131072, 00:37:28.792 "max_aq_depth": 128, 00:37:28.792 "max_io_qpairs_per_ctrlr": 127, 00:37:28.793 "max_io_size": 131072, 00:37:28.793 "max_queue_depth": 128, 00:37:28.793 "num_shared_buffers": 511, 00:37:28.793 "sock_priority": 0, 00:37:28.793 "trtype": "TCP", 00:37:28.793 "zcopy": false 00:37:28.793 } 00:37:28.793 }, 00:37:28.793 { 00:37:28.793 "method": 
"nvmf_create_subsystem", 00:37:28.793 "params": { 00:37:28.793 "allow_any_host": false, 00:37:28.793 "ana_reporting": false, 00:37:28.793 "max_cntlid": 65519, 00:37:28.793 "max_namespaces": 10, 00:37:28.793 "min_cntlid": 1, 00:37:28.793 "model_number": "SPDK bdev Controller", 00:37:28.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:28.793 "serial_number": "SPDK00000000000001" 00:37:28.793 } 00:37:28.793 }, 00:37:28.793 { 00:37:28.793 "method": "nvmf_subsystem_add_host", 00:37:28.793 "params": { 00:37:28.793 "host": "nqn.2016-06.io.spdk:host1", 00:37:28.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:28.793 "psk": "key0" 00:37:28.793 } 00:37:28.793 }, 00:37:28.793 { 00:37:28.793 "method": "nvmf_subsystem_add_ns", 00:37:28.793 "params": { 00:37:28.793 "namespace": { 00:37:28.793 "bdev_name": "malloc0", 00:37:28.793 "nguid": "957204BD9AFA4AA5B50936B0FC1F49B3", 00:37:28.793 "no_auto_visible": false, 00:37:28.793 "nsid": 1, 00:37:28.793 "uuid": "957204bd-9afa-4aa5-b509-36b0fc1f49b3" 00:37:28.793 }, 00:37:28.793 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:37:28.793 } 00:37:28.793 }, 00:37:28.793 { 00:37:28.793 "method": "nvmf_subsystem_add_listener", 00:37:28.793 "params": { 00:37:28.793 "listen_address": { 00:37:28.793 "adrfam": "IPv4", 00:37:28.793 "traddr": "10.0.0.3", 00:37:28.793 "trsvcid": "4420", 00:37:28.793 "trtype": "TCP" 00:37:28.793 }, 00:37:28.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:28.793 "secure_channel": true 00:37:28.793 } 00:37:28.793 } 00:37:28.793 ] 00:37:28.793 } 00:37:28.793 ] 00:37:28.793 }' 00:37:28.793 20:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:37:28.793 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:37:28.793 "subsystems": [ 00:37:28.793 { 00:37:28.793 "subsystem": "keyring", 00:37:28.793 "config": [ 00:37:28.793 { 00:37:28.793 "method": "keyring_file_add_key", 00:37:28.793 "params": { 00:37:28.793 "name": "key0", 00:37:28.793 "path": "/tmp/tmp.gTlJJjye3R" 00:37:28.793 } 00:37:28.793 } 00:37:28.793 ] 00:37:28.793 }, 00:37:28.793 { 00:37:28.793 "subsystem": "iobuf", 00:37:28.793 "config": [ 00:37:28.793 { 00:37:28.793 "method": "iobuf_set_options", 00:37:28.793 "params": { 00:37:28.793 "large_bufsize": 135168, 00:37:28.793 "large_pool_count": 1024, 00:37:28.793 "small_bufsize": 8192, 00:37:28.793 "small_pool_count": 8192 00:37:28.793 } 00:37:28.793 } 00:37:28.793 ] 00:37:28.793 }, 00:37:28.793 { 00:37:28.793 "subsystem": "sock", 00:37:28.793 "config": [ 00:37:28.793 { 00:37:28.793 "method": "sock_set_default_impl", 00:37:28.793 "params": { 00:37:28.793 "impl_name": "posix" 00:37:28.793 } 00:37:28.793 }, 00:37:28.793 { 00:37:28.793 "method": "sock_impl_set_options", 00:37:28.793 "params": { 00:37:28.793 "enable_ktls": false, 00:37:28.793 "enable_placement_id": 0, 00:37:28.793 "enable_quickack": false, 00:37:28.793 "enable_recv_pipe": true, 00:37:28.793 "enable_zerocopy_send_client": false, 00:37:28.793 "enable_zerocopy_send_server": true, 00:37:28.793 "impl_name": "ssl", 00:37:28.793 "recv_buf_size": 4096, 00:37:28.793 "send_buf_size": 4096, 00:37:28.793 "tls_version": 0, 00:37:28.793 "zerocopy_threshold": 0 00:37:28.793 } 00:37:28.793 }, 00:37:28.793 { 00:37:28.793 "method": "sock_impl_set_options", 00:37:28.793 "params": { 00:37:28.793 "enable_ktls": false, 00:37:28.793 "enable_placement_id": 0, 00:37:28.793 "enable_quickack": false, 00:37:28.793 "enable_recv_pipe": true, 00:37:28.793 "enable_zerocopy_send_client": 
false, 00:37:28.793 "enable_zerocopy_send_server": true, 00:37:28.793 "impl_name": "posix", 00:37:28.793 "recv_buf_size": 2097152, 00:37:28.793 "send_buf_size": 2097152, 00:37:28.793 "tls_version": 0, 00:37:28.793 "zerocopy_threshold": 0 00:37:28.793 } 00:37:28.793 } 00:37:28.793 ] 00:37:28.793 }, 00:37:28.793 { 00:37:28.793 "subsystem": "vmd", 00:37:28.793 "config": [] 00:37:28.793 }, 00:37:28.793 { 00:37:28.793 "subsystem": "accel", 00:37:28.793 "config": [ 00:37:28.793 { 00:37:28.793 "method": "accel_set_options", 00:37:28.793 "params": { 00:37:28.793 "buf_count": 2048, 00:37:28.793 "large_cache_size": 16, 00:37:28.793 "sequence_count": 2048, 00:37:28.793 "small_cache_size": 128, 00:37:28.793 "task_count": 2048 00:37:28.793 } 00:37:28.793 } 00:37:28.793 ] 00:37:28.793 }, 00:37:28.793 { 00:37:28.793 "subsystem": "bdev", 00:37:28.793 "config": [ 00:37:28.793 { 00:37:28.793 "method": "bdev_set_options", 00:37:28.793 "params": { 00:37:28.793 "bdev_auto_examine": true, 00:37:28.793 "bdev_io_cache_size": 256, 00:37:28.793 "bdev_io_pool_size": 65535, 00:37:28.793 "bdev_io_stack_size": 4096, 00:37:28.793 "iobuf_large_cache_size": 16, 00:37:28.793 "iobuf_small_cache_size": 128 00:37:28.793 } 00:37:28.793 }, 00:37:28.793 { 00:37:28.793 "method": "bdev_raid_set_options", 00:37:28.793 "params": { 00:37:28.793 "process_max_bandwidth_mb_sec": 0, 00:37:28.793 "process_window_size_kb": 1024 00:37:28.793 } 00:37:28.793 }, 00:37:28.794 { 00:37:28.794 "method": "bdev_iscsi_set_options", 00:37:28.794 "params": { 00:37:28.794 "timeout_sec": 30 00:37:28.794 } 00:37:28.794 }, 00:37:28.794 { 00:37:28.794 "method": "bdev_nvme_set_options", 00:37:28.794 "params": { 00:37:28.794 "action_on_timeout": "none", 00:37:28.794 "allow_accel_sequence": false, 00:37:28.794 "arbitration_burst": 0, 00:37:28.794 "bdev_retry_count": 3, 00:37:28.794 "ctrlr_loss_timeout_sec": 0, 00:37:28.794 "delay_cmd_submit": true, 00:37:28.794 "dhchap_dhgroups": [ 00:37:28.794 "null", 00:37:28.794 "ffdhe2048", 00:37:28.794 "ffdhe3072", 00:37:28.794 "ffdhe4096", 00:37:28.794 "ffdhe6144", 00:37:28.794 "ffdhe8192" 00:37:28.794 ], 00:37:28.794 "dhchap_digests": [ 00:37:28.794 "sha256", 00:37:28.794 "sha384", 00:37:28.794 "sha512" 00:37:28.794 ], 00:37:28.794 "disable_auto_failback": false, 00:37:28.794 "fast_io_fail_timeout_sec": 0, 00:37:28.794 "generate_uuids": false, 00:37:28.794 "high_priority_weight": 0, 00:37:28.794 "io_path_stat": false, 00:37:28.794 "io_queue_requests": 512, 00:37:28.794 "keep_alive_timeout_ms": 10000, 00:37:28.794 "low_priority_weight": 0, 00:37:28.794 "medium_priority_weight": 0, 00:37:28.794 "nvme_adminq_poll_period_us": 10000, 00:37:28.794 "nvme_error_stat": false, 00:37:28.794 "nvme_ioq_poll_period_us": 0, 00:37:28.794 "rdma_cm_event_timeout_ms": 0, 00:37:28.794 "rdma_max_cq_size": 0, 00:37:28.794 "rdma_srq_size": 0, 00:37:28.794 "reconnect_delay_sec": 0, 00:37:28.794 "timeout_admin_us": 0, 00:37:28.794 "timeout_us": 0, 00:37:28.794 "transport_ack_timeout": 0, 00:37:28.794 "transport_retry_count": 4, 00:37:28.794 "transport_tos": 0 00:37:28.794 } 00:37:28.794 }, 00:37:28.794 { 00:37:28.794 "method": "bdev_nvme_attach_controller", 00:37:28.794 "params": { 00:37:28.794 "adrfam": "IPv4", 00:37:28.794 "ctrlr_loss_timeout_sec": 0, 00:37:28.794 "ddgst": false, 00:37:28.794 "fast_io_fail_timeout_sec": 0, 00:37:28.794 "hdgst": false, 00:37:28.794 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:28.794 "name": "TLSTEST", 00:37:28.794 "prchk_guard": false, 00:37:28.794 "prchk_reftag": false, 00:37:28.794 "psk": "key0", 
00:37:28.794 "reconnect_delay_sec": 0, 00:37:28.794 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:28.794 "traddr": "10.0.0.3", 00:37:28.794 "trsvcid": "4420", 00:37:28.794 "trtype": "TCP" 00:37:28.794 } 00:37:28.794 }, 00:37:28.794 { 00:37:28.794 "method": "bdev_nvme_set_hotplug", 00:37:28.794 "params": { 00:37:28.794 "enable": false, 00:37:28.794 "period_us": 100000 00:37:28.794 } 00:37:28.794 }, 00:37:28.794 { 00:37:28.794 "method": "bdev_wait_for_examine" 00:37:28.794 } 00:37:28.794 ] 00:37:28.794 }, 00:37:28.794 { 00:37:28.794 "subsystem": "nbd", 00:37:28.794 "config": [] 00:37:28.794 } 00:37:28.794 ] 00:37:28.794 }' 00:37:28.794 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 83662 00:37:28.794 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83662 ']' 00:37:28.794 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83662 00:37:28.794 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:37:28.794 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:28.794 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83662 00:37:29.052 killing process with pid 83662 00:37:29.052 Received shutdown signal, test time was about 10.000000 seconds 00:37:29.052 00:37:29.052 Latency(us) 00:37:29.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:29.052 =================================================================================================================== 00:37:29.052 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:29.052 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:37:29.052 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:37:29.052 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83662' 00:37:29.052 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83662 00:37:29.052 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83662 00:37:29.052 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 83566 00:37:29.052 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83566 ']' 00:37:29.052 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83566 00:37:29.052 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:37:29.052 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:29.052 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83566 00:37:29.052 killing process with pid 83566 00:37:29.052 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:29.052 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:29.052 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83566' 00:37:29.052 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83566 00:37:29.052 20:36:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83566 00:37:29.310 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:37:29.310 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:29.310 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:29.310 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:29.310 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:37:29.310 "subsystems": [ 00:37:29.310 { 00:37:29.310 "subsystem": "keyring", 00:37:29.310 "config": [ 00:37:29.310 { 00:37:29.310 "method": "keyring_file_add_key", 00:37:29.310 "params": { 00:37:29.310 "name": "key0", 00:37:29.310 "path": "/tmp/tmp.gTlJJjye3R" 00:37:29.310 } 00:37:29.310 } 00:37:29.310 ] 00:37:29.310 }, 00:37:29.310 { 00:37:29.310 "subsystem": "iobuf", 00:37:29.310 "config": [ 00:37:29.310 { 00:37:29.310 "method": "iobuf_set_options", 00:37:29.310 "params": { 00:37:29.310 "large_bufsize": 135168, 00:37:29.310 "large_pool_count": 1024, 00:37:29.310 "small_bufsize": 8192, 00:37:29.310 "small_pool_count": 8192 00:37:29.310 } 00:37:29.310 } 00:37:29.310 ] 00:37:29.310 }, 00:37:29.310 { 00:37:29.310 "subsystem": "sock", 00:37:29.310 "config": [ 00:37:29.310 { 00:37:29.310 "method": "sock_set_default_impl", 00:37:29.310 "params": { 00:37:29.310 "impl_name": "posix" 00:37:29.310 } 00:37:29.310 }, 00:37:29.310 { 00:37:29.310 "method": "sock_impl_set_options", 00:37:29.310 "params": { 00:37:29.310 "enable_ktls": false, 00:37:29.310 "enable_placement_id": 0, 00:37:29.310 "enable_quickack": false, 00:37:29.310 "enable_recv_pipe": true, 00:37:29.310 "enable_zerocopy_send_client": false, 00:37:29.310 "enable_zerocopy_send_server": true, 00:37:29.310 "impl_name": "ssl", 00:37:29.311 "recv_buf_size": 4096, 00:37:29.311 "send_buf_size": 4096, 00:37:29.311 "tls_version": 0, 00:37:29.311 "zerocopy_threshold": 0 00:37:29.311 } 00:37:29.311 }, 00:37:29.311 { 00:37:29.311 "method": "sock_impl_set_options", 00:37:29.311 "params": { 00:37:29.311 "enable_ktls": false, 00:37:29.311 "enable_placement_id": 0, 00:37:29.311 "enable_quickack": false, 00:37:29.311 "enable_recv_pipe": true, 00:37:29.311 "enable_zerocopy_send_client": false, 00:37:29.311 "enable_zerocopy_send_server": true, 00:37:29.311 "impl_name": "posix", 00:37:29.311 "recv_buf_size": 2097152, 00:37:29.311 "send_buf_size": 2097152, 00:37:29.311 "tls_version": 0, 00:37:29.311 "zerocopy_threshold": 0 00:37:29.311 } 00:37:29.311 } 00:37:29.311 ] 00:37:29.311 }, 00:37:29.311 { 00:37:29.311 "subsystem": "vmd", 00:37:29.311 "config": [] 00:37:29.311 }, 00:37:29.311 { 00:37:29.311 "subsystem": "accel", 00:37:29.311 "config": [ 00:37:29.311 { 00:37:29.311 "method": "accel_set_options", 00:37:29.311 "params": { 00:37:29.311 "buf_count": 2048, 00:37:29.311 "large_cache_size": 16, 00:37:29.311 "sequence_count": 2048, 00:37:29.311 "small_cache_size": 128, 00:37:29.311 "task_count": 2048 00:37:29.311 } 00:37:29.311 } 00:37:29.311 ] 00:37:29.311 }, 00:37:29.311 { 00:37:29.311 "subsystem": "bdev", 00:37:29.311 "config": [ 00:37:29.311 { 00:37:29.311 "method": "bdev_set_options", 00:37:29.311 "params": { 00:37:29.311 "bdev_auto_examine": true, 00:37:29.311 "bdev_io_cache_size": 256, 00:37:29.311 "bdev_io_pool_size": 65535, 00:37:29.311 "bdev_io_stack_size": 4096, 00:37:29.311 "iobuf_large_cache_size": 16, 00:37:29.311 
"iobuf_small_cache_size": 128 00:37:29.311 } 00:37:29.311 }, 00:37:29.311 { 00:37:29.311 "method": "bdev_raid_set_options", 00:37:29.311 "params": { 00:37:29.311 "process_max_bandwidth_mb_sec": 0, 00:37:29.311 "process_window_size_kb": 1024 00:37:29.311 } 00:37:29.311 }, 00:37:29.311 { 00:37:29.311 "method": "bdev_iscsi_set_options", 00:37:29.311 "params": { 00:37:29.311 "timeout_sec": 30 00:37:29.311 } 00:37:29.311 }, 00:37:29.311 { 00:37:29.311 "method": "bdev_nvme_set_options", 00:37:29.311 "params": { 00:37:29.311 "action_on_timeout": "none", 00:37:29.311 "allow_accel_sequence": false, 00:37:29.311 "arbitration_burst": 0, 00:37:29.311 "bdev_retry_count": 3, 00:37:29.311 "ctrlr_loss_timeout_sec": 0, 00:37:29.311 "delay_cmd_submit": true, 00:37:29.311 "dhchap_dhgroups": [ 00:37:29.311 "null", 00:37:29.311 "ffdhe2048", 00:37:29.311 "ffdhe3072", 00:37:29.311 "ffdhe4096", 00:37:29.311 "ffdhe6144", 00:37:29.311 "ffdhe8192" 00:37:29.311 ], 00:37:29.311 "dhchap_digests": [ 00:37:29.311 "sha256", 00:37:29.311 "sha384", 00:37:29.311 "sha512" 00:37:29.311 ], 00:37:29.311 "disable_auto_failback": false, 00:37:29.311 "fast_io_fail_timeout_sec": 0, 00:37:29.311 "generate_uuids": false, 00:37:29.311 "high_priority_weight": 0, 00:37:29.311 "io_path_stat": false, 00:37:29.311 "io_queue_requests": 0, 00:37:29.311 "keep_alive_timeout_ms": 10000, 00:37:29.311 "low_priority_weight": 0, 00:37:29.311 "medium_priority_weight": 0, 00:37:29.311 "nvme_adminq_poll_period_us": 10000, 00:37:29.311 "nvme_error_stat": false, 00:37:29.311 "nvme_ioq_poll_period_us": 0, 00:37:29.311 "rdma_cm_event_timeout_ms": 0, 00:37:29.311 "rdma_max_cq_size": 0, 00:37:29.311 "rdma_srq_size": 0, 00:37:29.311 "reconnect_delay_sec": 0, 00:37:29.311 "timeout_admin_us": 0, 00:37:29.311 "timeout_us": 0, 00:37:29.311 "transport_ack_timeout": 0, 00:37:29.311 "transport_retry_count": 4, 00:37:29.311 "transport_tos": 0 00:37:29.311 } 00:37:29.311 }, 00:37:29.311 { 00:37:29.311 "method": "bdev_nvme_set_hotplug", 00:37:29.311 "params": { 00:37:29.311 "enable": false, 00:37:29.311 "period_us": 100000 00:37:29.311 } 00:37:29.311 }, 00:37:29.311 { 00:37:29.311 "method": "bdev_malloc_create", 00:37:29.311 "params": { 00:37:29.311 "block_size": 4096, 00:37:29.311 "dif_is_head_of_md": false, 00:37:29.311 "dif_pi_format": 0, 00:37:29.311 "dif_type": 0, 00:37:29.311 "md_size": 0, 00:37:29.311 "name": "malloc0", 00:37:29.311 "num_blocks": 8192, 00:37:29.311 "optimal_io_boundary": 0, 00:37:29.311 "physical_block_size": 4096, 00:37:29.311 "uuid": "957204bd-9afa-4aa5-b509-36b0fc1f49b3" 00:37:29.311 } 00:37:29.311 }, 00:37:29.311 { 00:37:29.311 "method": "bdev_wait_for_examine" 00:37:29.311 } 00:37:29.311 ] 00:37:29.311 }, 00:37:29.311 { 00:37:29.311 "subsystem": "nbd", 00:37:29.311 "config": [] 00:37:29.311 }, 00:37:29.311 { 00:37:29.311 "subsystem": "scheduler", 00:37:29.311 "config": [ 00:37:29.311 { 00:37:29.311 "method": "framework_set_scheduler", 00:37:29.311 "params": { 00:37:29.311 "name": "static" 00:37:29.311 } 00:37:29.311 } 00:37:29.311 ] 00:37:29.311 }, 00:37:29.311 { 00:37:29.311 "subsystem": "nvmf", 00:37:29.311 "config": [ 00:37:29.311 { 00:37:29.311 "method": "nvmf_set_config", 00:37:29.311 "params": { 00:37:29.311 "admin_cmd_passthru": { 00:37:29.311 "identify_ctrlr": false 00:37:29.311 }, 00:37:29.311 "dhchap_dhgroups": [ 00:37:29.311 "null", 00:37:29.311 "ffdhe2048", 00:37:29.311 "ffdhe3072", 00:37:29.311 "ffdhe4096", 00:37:29.311 "ffdhe6144", 00:37:29.311 "ffdhe8192" 00:37:29.311 ], 00:37:29.311 "dhchap_digests": [ 00:37:29.311 "sha256", 
00:37:29.311 "sha384", 00:37:29.311 "sha512" 00:37:29.311 ], 00:37:29.311 "discovery_filter": "match_any" 00:37:29.311 } 00:37:29.311 }, 00:37:29.311 { 00:37:29.311 "method": "nvmf_set_max_subsystems", 00:37:29.311 "params": { 00:37:29.311 "max_subsystems": 1024 00:37:29.311 } 00:37:29.311 }, 00:37:29.311 { 00:37:29.311 "method": "nvmf_set_crdt", 00:37:29.311 "params": { 00:37:29.311 "crdt1": 0, 00:37:29.311 "crdt2": 0, 00:37:29.311 "crdt3": 0 00:37:29.311 } 00:37:29.311 }, 00:37:29.311 { 00:37:29.311 "method": "nvmf_create_transport", 00:37:29.311 "params": { 00:37:29.311 "abort_timeout_sec": 1, 00:37:29.311 "ack_timeout": 0, 00:37:29.311 "buf_cache_size": 4294967295, 00:37:29.311 "c2h_success": false, 00:37:29.311 "data_wr_pool_size": 0, 00:37:29.311 "dif_insert_or_strip": false, 00:37:29.311 "in_capsule_data_size": 4096, 00:37:29.311 "io_unit_size": 131072, 00:37:29.311 "max_aq_depth": 128, 00:37:29.311 "max_io_qpairs_per_ctrlr": 127, 00:37:29.311 "max_io_size": 131072, 00:37:29.311 "max_queue_depth": 128, 00:37:29.311 "num_shared_buffers": 511, 00:37:29.311 "sock_priority": 0, 00:37:29.311 "trtype": "TCP", 00:37:29.311 "zcopy": false 00:37:29.311 } 00:37:29.311 }, 00:37:29.311 { 00:37:29.311 "method": "nvmf_create_subsystem", 00:37:29.311 "params": { 00:37:29.311 "allow_any_host": false, 00:37:29.311 "ana_reporting": false, 00:37:29.311 "max_cntlid": 65519, 00:37:29.311 "max_namespaces": 10, 00:37:29.311 "min_cntlid": 1, 00:37:29.311 "model_number": "SPDK bdev Controller", 00:37:29.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:29.311 "serial_number": "SPDK00000000000001" 00:37:29.312 } 00:37:29.312 }, 00:37:29.312 { 00:37:29.312 "method": "nvmf_subsystem_add_host", 00:37:29.312 "params": { 00:37:29.312 "host": "nqn.2016-06.io.spdk:host1", 00:37:29.312 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:29.312 "psk": "key0" 00:37:29.312 } 00:37:29.312 }, 00:37:29.312 { 00:37:29.312 "method": "nvmf_subsystem_add_ns", 00:37:29.312 "params": { 00:37:29.312 "namespace": { 00:37:29.312 "bdev_name": "malloc0", 00:37:29.312 "nguid": "957204BD9AFA4AA5B50936B0FC1F49B3", 00:37:29.312 "no_auto_visible": false, 00:37:29.312 "nsid": 1, 00:37:29.312 "uuid": "957204bd-9afa-4aa5-b509-36b0fc1f49b3" 00:37:29.312 }, 00:37:29.312 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:37:29.312 } 00:37:29.312 }, 00:37:29.312 { 00:37:29.312 "method": "nvmf_subsystem_add_listener", 00:37:29.312 "params": { 00:37:29.312 "listen_address": { 00:37:29.312 "adrfam": "IPv4", 00:37:29.312 "traddr": "10.0.0.3", 00:37:29.312 "trsvcid": "4420", 00:37:29.312 "trtype": "TCP" 00:37:29.312 }, 00:37:29.312 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:29.312 "secure_channel": true 00:37:29.312 } 00:37:29.312 } 00:37:29.312 ] 00:37:29.312 } 00:37:29.312 ] 00:37:29.312 }' 00:37:29.312 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=83743 00:37:29.312 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:37:29.312 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 83743 00:37:29.312 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83743 ']' 00:37:29.312 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:29.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:29.312 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:29.312 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:29.312 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:29.312 20:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:29.312 [2024-10-01 20:36:24.539671] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:37:29.312 [2024-10-01 20:36:24.539938] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:29.570 [2024-10-01 20:36:24.672180] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:29.570 [2024-10-01 20:36:24.721686] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:29.570 [2024-10-01 20:36:24.721749] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:29.570 [2024-10-01 20:36:24.721758] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:29.570 [2024-10-01 20:36:24.721765] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:29.570 [2024-10-01 20:36:24.721771] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:29.570 [2024-10-01 20:36:24.721849] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:29.827 [2024-10-01 20:36:24.957329] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:29.827 [2024-10-01 20:36:24.989278] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:29.827 [2024-10-01 20:36:24.989474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:30.394 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:30.394 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:37:30.394 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:30.394 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:30.394 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:30.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
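The startup banner a moment ago spells out the tracing hooks for this instance (-e 0xFFFF enables every tracepoint group). A hedged sketch of using them while the target is alive; the commands are the ones the notices themselves suggest, and only the build path of spdk_trace is assumed:

  # live snapshot of the nvmf app, instance 0, as the banner recommends
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file for offline analysis later
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0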
00:37:30.394 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:30.394 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=83787 00:37:30.394 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 83787 /var/tmp/bdevperf.sock 00:37:30.394 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83787 ']' 00:37:30.394 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:30.394 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:30.394 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:37:30.394 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:30.394 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:30.394 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:37:30.394 "subsystems": [ 00:37:30.394 { 00:37:30.394 "subsystem": "keyring", 00:37:30.394 "config": [ 00:37:30.394 { 00:37:30.394 "method": "keyring_file_add_key", 00:37:30.394 "params": { 00:37:30.394 "name": "key0", 00:37:30.394 "path": "/tmp/tmp.gTlJJjye3R" 00:37:30.394 } 00:37:30.394 } 00:37:30.394 ] 00:37:30.394 }, 00:37:30.394 { 00:37:30.394 "subsystem": "iobuf", 00:37:30.394 "config": [ 00:37:30.394 { 00:37:30.394 "method": "iobuf_set_options", 00:37:30.394 "params": { 00:37:30.394 "large_bufsize": 135168, 00:37:30.394 "large_pool_count": 1024, 00:37:30.394 "small_bufsize": 8192, 00:37:30.394 "small_pool_count": 8192 00:37:30.394 } 00:37:30.394 } 00:37:30.394 ] 00:37:30.394 }, 00:37:30.394 { 00:37:30.394 "subsystem": "sock", 00:37:30.394 "config": [ 00:37:30.394 { 00:37:30.394 "method": "sock_set_default_impl", 00:37:30.394 "params": { 00:37:30.394 "impl_name": "posix" 00:37:30.394 } 00:37:30.394 }, 00:37:30.394 { 00:37:30.394 "method": "sock_impl_set_options", 00:37:30.394 "params": { 00:37:30.394 "enable_ktls": false, 00:37:30.394 "enable_placement_id": 0, 00:37:30.394 "enable_quickack": false, 00:37:30.394 "enable_recv_pipe": true, 00:37:30.394 "enable_zerocopy_send_client": false, 00:37:30.394 "enable_zerocopy_send_server": true, 00:37:30.394 "impl_name": "ssl", 00:37:30.394 "recv_buf_size": 4096, 00:37:30.394 "send_buf_size": 4096, 00:37:30.394 "tls_version": 0, 00:37:30.394 "zerocopy_threshold": 0 00:37:30.394 } 00:37:30.394 }, 00:37:30.394 { 00:37:30.394 "method": "sock_impl_set_options", 00:37:30.394 "params": { 00:37:30.394 "enable_ktls": false, 00:37:30.394 "enable_placement_id": 0, 00:37:30.394 "enable_quickack": false, 00:37:30.394 "enable_recv_pipe": true, 00:37:30.394 "enable_zerocopy_send_client": false, 00:37:30.394 "enable_zerocopy_send_server": true, 00:37:30.394 "impl_name": "posix", 00:37:30.394 "recv_buf_size": 2097152, 00:37:30.394 "send_buf_size": 2097152, 00:37:30.394 "tls_version": 0, 00:37:30.394 "zerocopy_threshold": 0 00:37:30.394 } 00:37:30.394 } 00:37:30.394 ] 00:37:30.394 }, 00:37:30.394 { 00:37:30.394 "subsystem": "vmd", 00:37:30.394 "config": [] 00:37:30.394 }, 00:37:30.394 { 00:37:30.394 
"subsystem": "accel", 00:37:30.394 "config": [ 00:37:30.394 { 00:37:30.394 "method": "accel_set_options", 00:37:30.394 "params": { 00:37:30.394 "buf_count": 2048, 00:37:30.394 "large_cache_size": 16, 00:37:30.394 "sequence_count": 2048, 00:37:30.394 "small_cache_size": 128, 00:37:30.394 "task_count": 2048 00:37:30.394 } 00:37:30.394 } 00:37:30.395 ] 00:37:30.395 }, 00:37:30.395 { 00:37:30.395 "subsystem": "bdev", 00:37:30.395 "config": [ 00:37:30.395 { 00:37:30.395 "method": "bdev_set_options", 00:37:30.395 "params": { 00:37:30.395 "bdev_auto_examine": true, 00:37:30.395 "bdev_io_cache_size": 256, 00:37:30.395 "bdev_io_pool_size": 65535, 00:37:30.395 "bdev_io_stack_size": 4096, 00:37:30.395 "iobuf_large_cache_size": 16, 00:37:30.395 "iobuf_small_cache_size": 128 00:37:30.395 } 00:37:30.395 }, 00:37:30.395 { 00:37:30.395 "method": "bdev_raid_set_options", 00:37:30.395 "params": { 00:37:30.395 "process_max_bandwidth_mb_sec": 0, 00:37:30.395 "process_window_size_kb": 1024 00:37:30.395 } 00:37:30.395 }, 00:37:30.395 { 00:37:30.395 "method": "bdev_iscsi_set_options", 00:37:30.395 "params": { 00:37:30.395 "timeout_sec": 30 00:37:30.395 } 00:37:30.395 }, 00:37:30.395 { 00:37:30.395 "method": "bdev_nvme_set_options", 00:37:30.395 "params": { 00:37:30.395 "action_on_timeout": "none", 00:37:30.395 "allow_accel_sequence": false, 00:37:30.395 "arbitration_burst": 0, 00:37:30.395 "bdev_retry_count": 3, 00:37:30.395 "ctrlr_loss_timeout_sec": 0, 00:37:30.395 "delay_cmd_submit": true, 00:37:30.395 "dhchap_dhgroups": [ 00:37:30.395 "null", 00:37:30.395 "ffdhe2048", 00:37:30.395 "ffdhe3072", 00:37:30.395 "ffdhe4096", 00:37:30.395 "ffdhe6144", 00:37:30.395 "ffdhe8192" 00:37:30.395 ], 00:37:30.395 "dhchap_digests": [ 00:37:30.395 "sha256", 00:37:30.395 "sha384", 00:37:30.395 "sha512" 00:37:30.395 ], 00:37:30.395 "disable_auto_failback": false, 00:37:30.395 "fast_io_fail_timeout_sec": 0, 00:37:30.395 "generate_uuids": false, 00:37:30.395 "high_priority_weight": 0, 00:37:30.395 "io_path_stat": false, 00:37:30.395 "io_queue_requests": 512, 00:37:30.395 "keep_alive_timeout_ms": 10000, 00:37:30.395 "low_priority_weight": 0, 00:37:30.395 "medium_priority_weight": 0, 00:37:30.395 "nvme_adminq_poll_period_us": 10000, 00:37:30.395 "nvme_error_stat": false, 00:37:30.395 "nvme_ioq_poll_period_us": 0, 00:37:30.395 "rdma_cm_event_timeout_ms": 0, 00:37:30.395 "rdma_max_cq_size": 0, 00:37:30.395 "rdma_srq_size": 0, 00:37:30.395 "reconnect_delay_sec": 0, 00:37:30.395 "timeout_admin_us": 0, 00:37:30.395 "timeout_us": 0, 00:37:30.395 "transport_ack_timeout": 0, 00:37:30.395 "transport_retry_count": 4, 00:37:30.395 "transport_tos": 0 00:37:30.395 } 00:37:30.395 }, 00:37:30.395 { 00:37:30.395 "method": "bdev_nvme_attach_controller", 00:37:30.395 "params": { 00:37:30.395 "adrfam": "IPv4", 00:37:30.395 "ctrlr_loss_timeout_sec": 0, 00:37:30.395 "ddgst": false, 00:37:30.395 "fast_io_fail_timeout_sec": 0, 00:37:30.395 "hdgst": false, 00:37:30.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:30.395 "name": "TLSTEST", 00:37:30.395 "prchk_guard": false, 00:37:30.395 "prchk_reftag": false, 00:37:30.395 "psk": "key0", 00:37:30.395 "reconnect_delay_sec": 0, 00:37:30.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:30.395 "traddr": "10.0.0.3", 00:37:30.395 "trsvcid": "4420", 00:37:30.395 "trtype": "TCP" 00:37:30.395 } 00:37:30.395 }, 00:37:30.395 { 00:37:30.395 "method": "bdev_nvme_set_hotplug", 00:37:30.395 "params": { 00:37:30.395 "enable": false, 00:37:30.395 "period_us": 100000 00:37:30.395 } 00:37:30.395 }, 00:37:30.395 { 
00:37:30.395 "method": "bdev_wait_for_examine" 00:37:30.395 } 00:37:30.395 ] 00:37:30.395 }, 00:37:30.395 { 00:37:30.395 "subsystem": "nbd", 00:37:30.395 "config": [] 00:37:30.395 } 00:37:30.395 ] 00:37:30.395 }' 00:37:30.395 20:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:30.395 [2024-10-01 20:36:25.617815] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:37:30.395 [2024-10-01 20:36:25.617896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83787 ] 00:37:30.653 [2024-10-01 20:36:25.752657] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:30.653 [2024-10-01 20:36:25.802560] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:30.912 [2024-10-01 20:36:25.971824] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:31.477 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:31.477 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:37:31.477 20:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:37:31.477 Running I/O for 10 seconds... 00:37:41.692 4736.00 IOPS, 18.50 MiB/s 4736.00 IOPS, 18.50 MiB/s 4766.33 IOPS, 18.62 MiB/s 4780.75 IOPS, 18.67 MiB/s 4792.60 IOPS, 18.72 MiB/s 4796.50 IOPS, 18.74 MiB/s 4796.14 IOPS, 18.73 MiB/s 4804.00 IOPS, 18.77 MiB/s 4811.00 IOPS, 18.79 MiB/s 4814.60 IOPS, 18.81 MiB/s 00:37:41.692 Latency(us) 00:37:41.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:41.692 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:37:41.692 Verification LBA range: start 0x0 length 0x2000 00:37:41.692 TLSTESTn1 : 10.01 4820.33 18.83 0.00 0.00 26508.67 4855.62 21448.15 00:37:41.692 =================================================================================================================== 00:37:41.692 Total : 4820.33 18.83 0.00 0.00 26508.67 4855.62 21448.15 00:37:41.692 { 00:37:41.692 "results": [ 00:37:41.692 { 00:37:41.692 "job": "TLSTESTn1", 00:37:41.692 "core_mask": "0x4", 00:37:41.692 "workload": "verify", 00:37:41.692 "status": "finished", 00:37:41.692 "verify_range": { 00:37:41.692 "start": 0, 00:37:41.692 "length": 8192 00:37:41.692 }, 00:37:41.692 "queue_depth": 128, 00:37:41.692 "io_size": 4096, 00:37:41.692 "runtime": 10.014246, 00:37:41.692 "iops": 4820.332953674196, 00:37:41.692 "mibps": 18.829425600289827, 00:37:41.692 "io_failed": 0, 00:37:41.692 "io_timeout": 0, 00:37:41.692 "avg_latency_us": 26508.672489830355, 00:37:41.692 "min_latency_us": 4855.6218181818185, 00:37:41.692 "max_latency_us": 21448.145454545454 00:37:41.692 } 00:37:41.692 ], 00:37:41.692 "core_count": 1 00:37:41.692 } 00:37:41.692 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:41.692 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 83787 00:37:41.692 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83787 ']' 00:37:41.693 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 83787 00:37:41.693 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:37:41.693 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:41.693 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83787 00:37:41.693 killing process with pid 83787 00:37:41.693 Received shutdown signal, test time was about 10.000000 seconds 00:37:41.693 00:37:41.693 Latency(us) 00:37:41.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:41.693 =================================================================================================================== 00:37:41.693 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:41.693 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:37:41.693 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:37:41.693 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83787' 00:37:41.693 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83787 00:37:41.693 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83787 00:37:41.693 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 83743 00:37:41.693 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83743 ']' 00:37:41.693 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83743 00:37:41.693 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:37:41.955 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:41.955 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83743 00:37:41.955 killing process with pid 83743 00:37:41.955 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:41.955 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:41.955 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83743' 00:37:41.955 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83743 00:37:41.955 20:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83743 00:37:41.955 20:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:37:41.955 20:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:41.955 20:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:41.955 20:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:41.955 20:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:37:41.955 20:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=83943 00:37:41.955 20:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 83943 00:37:41.955 20:36:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83943 ']' 00:37:41.955 20:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:41.955 20:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:41.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:41.955 20:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:41.955 20:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:41.955 20:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:42.214 [2024-10-01 20:36:37.231746] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:37:42.215 [2024-10-01 20:36:37.231849] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:42.215 [2024-10-01 20:36:37.367550] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:42.215 [2024-10-01 20:36:37.436756] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:42.215 [2024-10-01 20:36:37.436829] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:42.215 [2024-10-01 20:36:37.436847] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:42.215 [2024-10-01 20:36:37.436857] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:42.215 [2024-10-01 20:36:37.436866] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:42.215 [2024-10-01 20:36:37.436905] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:42.474 20:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:42.474 20:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:37:42.474 20:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:42.474 20:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:42.474 20:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:42.474 20:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:42.474 20:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.gTlJJjye3R 00:37:42.474 20:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gTlJJjye3R 00:37:42.474 20:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:37:42.734 [2024-10-01 20:36:37.897905] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:42.734 20:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:37:42.993 20:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:37:43.252 [2024-10-01 20:36:38.406003] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:43.252 [2024-10-01 20:36:38.406262] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:43.252 20:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:37:43.511 malloc0 00:37:43.511 20:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:37:43.770 20:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gTlJJjye3R 00:37:44.027 20:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:37:44.286 20:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=84038 00:37:44.286 20:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:37:44.286 20:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:44.286 20:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 84038 /var/tmp/bdevperf.sock 00:37:44.286 20:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84038 ']' 00:37:44.286 20:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
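setup_nvmf_tgt above is the entire TLS provisioning of the target in seven RPCs. Collected into one sketch for reference; the commands and arguments are exactly those traced above, with the PSK path from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_create_transport -t tcp -o          # TCP transport (the saved config shows c2h_success false)
  $rpc nvmf_create_subsystem $nqn -s SPDK00000000000001 -m 10   # -m 10: max_namespaces in the saved config
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4420 -k   # -k: the experimental TLS listener
  $rpc bdev_malloc_create 32 4096 -b malloc0    # 8192 blocks x 4 KiB = 32 MiB, per the saved config
  $rpc nvmf_subsystem_add_ns $nqn malloc0 -n 1
  $rpc keyring_file_add_key key0 /tmp/tmp.gTlJJjye3R
  $rpc nvmf_subsystem_add_host $nqn nqn.2016-06.io.spdk:host1 --psk key0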
00:37:44.286 20:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:44.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:44.286 20:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:44.286 20:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:44.286 20:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:44.286 [2024-10-01 20:36:39.414554] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:37:44.286 [2024-10-01 20:36:39.414661] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84038 ] 00:37:44.557 [2024-10-01 20:36:39.543004] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:44.557 [2024-10-01 20:36:39.597201] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:44.557 20:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:44.557 20:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:37:44.557 20:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gTlJJjye3R 00:37:44.816 20:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:37:45.075 [2024-10-01 20:36:40.162672] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:45.075 nvme0n1 00:37:45.075 20:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:45.334 Running I/O for 1 seconds... 
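Because bdevperf was started with -z it idles until it is configured over its own RPC socket; the whole initiator side is the two RPCs and the out-of-band kick traced above, gathered here as a sketch with this run's flags:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf=/var/tmp/bdevperf.sock
  # load the same PSK the target holds, then attach over TLS
  $rpc -s $bperf keyring_file_add_key key0 /tmp/tmp.gTlJJjye3R
  $rpc -s $bperf bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # kick off the queued workload (-q 128 -o 4k -w verify -t 1 from the launch line)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $bperf perform_tests

A quick sanity check on the numbers that follow: with 4 KiB I/O, MiB/s is IOPS × 4096 / 2^20, e.g. 4765.50 IOPS × 4096 / 1048576 ≈ 18.62 MiB/s, which matches the table.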
00:37:46.271 4736.00 IOPS, 18.50 MiB/s 00:37:46.271 Latency(us) 00:37:46.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:46.271 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:37:46.271 Verification LBA range: start 0x0 length 0x2000 00:37:46.271 nvme0n1 : 1.02 4765.50 18.62 0.00 0.00 26605.02 6166.34 16562.73 00:37:46.271 =================================================================================================================== 00:37:46.271 Total : 4765.50 18.62 0.00 0.00 26605.02 6166.34 16562.73 00:37:46.271 { 00:37:46.271 "results": [ 00:37:46.271 { 00:37:46.271 "job": "nvme0n1", 00:37:46.271 "core_mask": "0x2", 00:37:46.271 "workload": "verify", 00:37:46.271 "status": "finished", 00:37:46.271 "verify_range": { 00:37:46.271 "start": 0, 00:37:46.271 "length": 8192 00:37:46.271 }, 00:37:46.271 "queue_depth": 128, 00:37:46.271 "io_size": 4096, 00:37:46.271 "runtime": 1.02067, 00:37:46.271 "iops": 4765.4971734252995, 00:37:46.271 "mibps": 18.615223333692576, 00:37:46.271 "io_failed": 0, 00:37:46.271 "io_timeout": 0, 00:37:46.271 "avg_latency_us": 26605.017416267943, 00:37:46.271 "min_latency_us": 6166.341818181818, 00:37:46.271 "max_latency_us": 16562.734545454547 00:37:46.271 } 00:37:46.271 ], 00:37:46.271 "core_count": 1 00:37:46.271 } 00:37:46.271 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 84038 00:37:46.271 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84038 ']' 00:37:46.271 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84038 00:37:46.271 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:37:46.271 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:46.271 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84038 00:37:46.271 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:46.271 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:46.271 killing process with pid 84038 00:37:46.271 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84038' 00:37:46.271 Received shutdown signal, test time was about 1.000000 seconds 00:37:46.271 00:37:46.271 Latency(us) 00:37:46.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:46.271 =================================================================================================================== 00:37:46.271 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:46.271 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84038 00:37:46.271 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84038 00:37:46.533 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 83943 00:37:46.533 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83943 ']' 00:37:46.533 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83943 00:37:46.533 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:37:46.533 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = 
Linux ']' 00:37:46.533 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83943 00:37:46.533 killing process with pid 83943 00:37:46.533 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:46.533 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:46.533 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83943' 00:37:46.533 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83943 00:37:46.533 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83943 00:37:46.792 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:37:46.792 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:46.792 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:46.792 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:46.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:46.792 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=84097 00:37:46.792 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:37:46.792 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 84097 00:37:46.792 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84097 ']' 00:37:46.792 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:46.792 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:46.792 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:46.792 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:46.792 20:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:46.792 [2024-10-01 20:36:41.946697] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:37:46.792 [2024-10-01 20:36:41.946927] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:47.051 [2024-10-01 20:36:42.079558] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:47.051 [2024-10-01 20:36:42.127393] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:47.051 [2024-10-01 20:36:42.127669] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:47.051 [2024-10-01 20:36:42.127879] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:47.051 [2024-10-01 20:36:42.128075] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:37:47.051 [2024-10-01 20:36:42.128231] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:47.051 [2024-10-01 20:36:42.128376] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:47.051 20:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:47.051 20:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:37:47.051 20:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:47.051 20:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:47.051 20:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:47.051 20:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:47.051 20:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:37:47.051 20:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:47.051 20:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:47.311 [2024-10-01 20:36:42.304576] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:47.311 malloc0 00:37:47.311 [2024-10-01 20:36:42.330045] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:47.311 [2024-10-01 20:36:42.330289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:47.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:47.311 20:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:47.311 20:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=84134 00:37:47.311 20:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:37:47.311 20:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 84134 /var/tmp/bdevperf.sock 00:37:47.311 20:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84134 ']' 00:37:47.311 20:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:47.311 20:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:47.311 20:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:47.311 20:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:47.311 20:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:47.311 [2024-10-01 20:36:42.418131] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:37:47.311 [2024-10-01 20:36:42.418409] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84134 ] 00:37:47.311 [2024-10-01 20:36:42.557316] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:47.570 [2024-10-01 20:36:42.610249] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:47.570 20:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:47.570 20:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:37:47.570 20:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gTlJJjye3R 00:37:47.829 20:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:37:48.088 [2024-10-01 20:36:43.287903] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:48.347 nvme0n1 00:37:48.347 20:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:48.347 Running I/O for 1 seconds... 00:37:49.284 4864.00 IOPS, 19.00 MiB/s 00:37:49.284 Latency(us) 00:37:49.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:49.284 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:37:49.284 Verification LBA range: start 0x0 length 0x2000 00:37:49.284 nvme0n1 : 1.02 4878.41 19.06 0.00 0.00 25998.17 6017.40 17396.83 00:37:49.284 =================================================================================================================== 00:37:49.284 Total : 4878.41 19.06 0.00 0.00 25998.17 6017.40 17396.83 00:37:49.284 { 00:37:49.284 "results": [ 00:37:49.284 { 00:37:49.284 "job": "nvme0n1", 00:37:49.284 "core_mask": "0x2", 00:37:49.284 "workload": "verify", 00:37:49.284 "status": "finished", 00:37:49.284 "verify_range": { 00:37:49.284 "start": 0, 00:37:49.284 "length": 8192 00:37:49.284 }, 00:37:49.284 "queue_depth": 128, 00:37:49.284 "io_size": 4096, 00:37:49.284 "runtime": 1.023285, 00:37:49.284 "iops": 4878.406309092775, 00:37:49.284 "mibps": 19.05627464489365, 00:37:49.284 "io_failed": 0, 00:37:49.284 "io_timeout": 0, 00:37:49.284 "avg_latency_us": 25998.166526806526, 00:37:49.284 "min_latency_us": 6017.396363636363, 00:37:49.284 "max_latency_us": 17396.82909090909 00:37:49.284 } 00:37:49.284 ], 00:37:49.284 "core_count": 1 00:37:49.284 } 00:37:49.543 20:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:37:49.543 20:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:49.543 20:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:49.543 20:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.543 20:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:37:49.543 "subsystems": [ 00:37:49.543 { 00:37:49.543 "subsystem": "keyring", 
00:37:49.543 "config": [ 00:37:49.543 { 00:37:49.543 "method": "keyring_file_add_key", 00:37:49.543 "params": { 00:37:49.543 "name": "key0", 00:37:49.543 "path": "/tmp/tmp.gTlJJjye3R" 00:37:49.543 } 00:37:49.543 } 00:37:49.543 ] 00:37:49.543 }, 00:37:49.543 { 00:37:49.543 "subsystem": "iobuf", 00:37:49.543 "config": [ 00:37:49.543 { 00:37:49.543 "method": "iobuf_set_options", 00:37:49.543 "params": { 00:37:49.543 "large_bufsize": 135168, 00:37:49.543 "large_pool_count": 1024, 00:37:49.543 "small_bufsize": 8192, 00:37:49.543 "small_pool_count": 8192 00:37:49.543 } 00:37:49.543 } 00:37:49.543 ] 00:37:49.543 }, 00:37:49.543 { 00:37:49.543 "subsystem": "sock", 00:37:49.543 "config": [ 00:37:49.543 { 00:37:49.543 "method": "sock_set_default_impl", 00:37:49.543 "params": { 00:37:49.543 "impl_name": "posix" 00:37:49.544 } 00:37:49.544 }, 00:37:49.544 { 00:37:49.544 "method": "sock_impl_set_options", 00:37:49.544 "params": { 00:37:49.544 "enable_ktls": false, 00:37:49.544 "enable_placement_id": 0, 00:37:49.544 "enable_quickack": false, 00:37:49.544 "enable_recv_pipe": true, 00:37:49.544 "enable_zerocopy_send_client": false, 00:37:49.544 "enable_zerocopy_send_server": true, 00:37:49.544 "impl_name": "ssl", 00:37:49.544 "recv_buf_size": 4096, 00:37:49.544 "send_buf_size": 4096, 00:37:49.544 "tls_version": 0, 00:37:49.544 "zerocopy_threshold": 0 00:37:49.544 } 00:37:49.544 }, 00:37:49.544 { 00:37:49.544 "method": "sock_impl_set_options", 00:37:49.544 "params": { 00:37:49.544 "enable_ktls": false, 00:37:49.544 "enable_placement_id": 0, 00:37:49.544 "enable_quickack": false, 00:37:49.544 "enable_recv_pipe": true, 00:37:49.544 "enable_zerocopy_send_client": false, 00:37:49.544 "enable_zerocopy_send_server": true, 00:37:49.544 "impl_name": "posix", 00:37:49.544 "recv_buf_size": 2097152, 00:37:49.544 "send_buf_size": 2097152, 00:37:49.544 "tls_version": 0, 00:37:49.544 "zerocopy_threshold": 0 00:37:49.544 } 00:37:49.544 } 00:37:49.544 ] 00:37:49.544 }, 00:37:49.544 { 00:37:49.544 "subsystem": "vmd", 00:37:49.544 "config": [] 00:37:49.544 }, 00:37:49.544 { 00:37:49.544 "subsystem": "accel", 00:37:49.544 "config": [ 00:37:49.544 { 00:37:49.544 "method": "accel_set_options", 00:37:49.544 "params": { 00:37:49.544 "buf_count": 2048, 00:37:49.544 "large_cache_size": 16, 00:37:49.544 "sequence_count": 2048, 00:37:49.544 "small_cache_size": 128, 00:37:49.544 "task_count": 2048 00:37:49.544 } 00:37:49.544 } 00:37:49.544 ] 00:37:49.544 }, 00:37:49.544 { 00:37:49.544 "subsystem": "bdev", 00:37:49.544 "config": [ 00:37:49.544 { 00:37:49.544 "method": "bdev_set_options", 00:37:49.544 "params": { 00:37:49.544 "bdev_auto_examine": true, 00:37:49.544 "bdev_io_cache_size": 256, 00:37:49.544 "bdev_io_pool_size": 65535, 00:37:49.544 "bdev_io_stack_size": 4096, 00:37:49.544 "iobuf_large_cache_size": 16, 00:37:49.544 "iobuf_small_cache_size": 128 00:37:49.544 } 00:37:49.544 }, 00:37:49.544 { 00:37:49.544 "method": "bdev_raid_set_options", 00:37:49.544 "params": { 00:37:49.544 "process_max_bandwidth_mb_sec": 0, 00:37:49.544 "process_window_size_kb": 1024 00:37:49.544 } 00:37:49.544 }, 00:37:49.544 { 00:37:49.544 "method": "bdev_iscsi_set_options", 00:37:49.544 "params": { 00:37:49.544 "timeout_sec": 30 00:37:49.544 } 00:37:49.544 }, 00:37:49.544 { 00:37:49.544 "method": "bdev_nvme_set_options", 00:37:49.544 "params": { 00:37:49.544 "action_on_timeout": "none", 00:37:49.544 "allow_accel_sequence": false, 00:37:49.544 "arbitration_burst": 0, 00:37:49.544 "bdev_retry_count": 3, 00:37:49.544 "ctrlr_loss_timeout_sec": 0, 
00:37:49.544 "delay_cmd_submit": true, 00:37:49.544 "dhchap_dhgroups": [ 00:37:49.544 "null", 00:37:49.544 "ffdhe2048", 00:37:49.544 "ffdhe3072", 00:37:49.544 "ffdhe4096", 00:37:49.544 "ffdhe6144", 00:37:49.544 "ffdhe8192" 00:37:49.544 ], 00:37:49.544 "dhchap_digests": [ 00:37:49.544 "sha256", 00:37:49.544 "sha384", 00:37:49.544 "sha512" 00:37:49.544 ], 00:37:49.544 "disable_auto_failback": false, 00:37:49.544 "fast_io_fail_timeout_sec": 0, 00:37:49.544 "generate_uuids": false, 00:37:49.544 "high_priority_weight": 0, 00:37:49.544 "io_path_stat": false, 00:37:49.544 "io_queue_requests": 0, 00:37:49.544 "keep_alive_timeout_ms": 10000, 00:37:49.544 "low_priority_weight": 0, 00:37:49.544 "medium_priority_weight": 0, 00:37:49.544 "nvme_adminq_poll_period_us": 10000, 00:37:49.544 "nvme_error_stat": false, 00:37:49.544 "nvme_ioq_poll_period_us": 0, 00:37:49.544 "rdma_cm_event_timeout_ms": 0, 00:37:49.544 "rdma_max_cq_size": 0, 00:37:49.544 "rdma_srq_size": 0, 00:37:49.544 "reconnect_delay_sec": 0, 00:37:49.544 "timeout_admin_us": 0, 00:37:49.544 "timeout_us": 0, 00:37:49.544 "transport_ack_timeout": 0, 00:37:49.544 "transport_retry_count": 4, 00:37:49.544 "transport_tos": 0 00:37:49.544 } 00:37:49.544 }, 00:37:49.544 { 00:37:49.544 "method": "bdev_nvme_set_hotplug", 00:37:49.544 "params": { 00:37:49.544 "enable": false, 00:37:49.544 "period_us": 100000 00:37:49.544 } 00:37:49.544 }, 00:37:49.544 { 00:37:49.544 "method": "bdev_malloc_create", 00:37:49.544 "params": { 00:37:49.544 "block_size": 4096, 00:37:49.544 "dif_is_head_of_md": false, 00:37:49.544 "dif_pi_format": 0, 00:37:49.544 "dif_type": 0, 00:37:49.544 "md_size": 0, 00:37:49.544 "name": "malloc0", 00:37:49.544 "num_blocks": 8192, 00:37:49.544 "optimal_io_boundary": 0, 00:37:49.544 "physical_block_size": 4096, 00:37:49.544 "uuid": "b75e54ee-9447-41fb-9562-9fb2e895d6cf" 00:37:49.544 } 00:37:49.544 }, 00:37:49.544 { 00:37:49.544 "method": "bdev_wait_for_examine" 00:37:49.544 } 00:37:49.544 ] 00:37:49.544 }, 00:37:49.544 { 00:37:49.544 "subsystem": "nbd", 00:37:49.544 "config": [] 00:37:49.544 }, 00:37:49.544 { 00:37:49.544 "subsystem": "scheduler", 00:37:49.544 "config": [ 00:37:49.544 { 00:37:49.544 "method": "framework_set_scheduler", 00:37:49.544 "params": { 00:37:49.544 "name": "static" 00:37:49.544 } 00:37:49.544 } 00:37:49.544 ] 00:37:49.544 }, 00:37:49.544 { 00:37:49.544 "subsystem": "nvmf", 00:37:49.544 "config": [ 00:37:49.544 { 00:37:49.544 "method": "nvmf_set_config", 00:37:49.544 "params": { 00:37:49.544 "admin_cmd_passthru": { 00:37:49.544 "identify_ctrlr": false 00:37:49.544 }, 00:37:49.544 "dhchap_dhgroups": [ 00:37:49.544 "null", 00:37:49.544 "ffdhe2048", 00:37:49.544 "ffdhe3072", 00:37:49.544 "ffdhe4096", 00:37:49.544 "ffdhe6144", 00:37:49.544 "ffdhe8192" 00:37:49.544 ], 00:37:49.544 "dhchap_digests": [ 00:37:49.544 "sha256", 00:37:49.544 "sha384", 00:37:49.544 "sha512" 00:37:49.544 ], 00:37:49.544 "discovery_filter": "match_any" 00:37:49.544 } 00:37:49.544 }, 00:37:49.544 { 00:37:49.544 "method": "nvmf_set_max_subsystems", 00:37:49.544 "params": { 00:37:49.544 "max_subsystems": 1024 00:37:49.544 } 00:37:49.544 }, 00:37:49.544 { 00:37:49.544 "method": "nvmf_set_crdt", 00:37:49.544 "params": { 00:37:49.544 "crdt1": 0, 00:37:49.544 "crdt2": 0, 00:37:49.544 "crdt3": 0 00:37:49.544 } 00:37:49.544 }, 00:37:49.544 { 00:37:49.544 "method": "nvmf_create_transport", 00:37:49.544 "params": { 00:37:49.544 "abort_timeout_sec": 1, 00:37:49.544 "ack_timeout": 0, 00:37:49.544 "buf_cache_size": 4294967295, 00:37:49.544 "c2h_success": 
false, 00:37:49.544 "data_wr_pool_size": 0, 00:37:49.544 "dif_insert_or_strip": false, 00:37:49.544 "in_capsule_data_size": 4096, 00:37:49.544 "io_unit_size": 131072, 00:37:49.544 "max_aq_depth": 128, 00:37:49.544 "max_io_qpairs_per_ctrlr": 127, 00:37:49.544 "max_io_size": 131072, 00:37:49.544 "max_queue_depth": 128, 00:37:49.544 "num_shared_buffers": 511, 00:37:49.544 "sock_priority": 0, 00:37:49.544 "trtype": "TCP", 00:37:49.544 "zcopy": false 00:37:49.544 } 00:37:49.544 }, 00:37:49.544 { 00:37:49.544 "method": "nvmf_create_subsystem", 00:37:49.544 "params": { 00:37:49.544 "allow_any_host": false, 00:37:49.544 "ana_reporting": false, 00:37:49.544 "max_cntlid": 65519, 00:37:49.544 "max_namespaces": 32, 00:37:49.544 "min_cntlid": 1, 00:37:49.544 "model_number": "SPDK bdev Controller", 00:37:49.544 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:49.544 "serial_number": "00000000000000000000" 00:37:49.544 } 00:37:49.544 }, 00:37:49.544 { 00:37:49.544 "method": "nvmf_subsystem_add_host", 00:37:49.545 "params": { 00:37:49.545 "host": "nqn.2016-06.io.spdk:host1", 00:37:49.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:49.545 "psk": "key0" 00:37:49.545 } 00:37:49.545 }, 00:37:49.545 { 00:37:49.545 "method": "nvmf_subsystem_add_ns", 00:37:49.545 "params": { 00:37:49.545 "namespace": { 00:37:49.545 "bdev_name": "malloc0", 00:37:49.545 "nguid": "B75E54EE944741FB95629FB2E895D6CF", 00:37:49.545 "no_auto_visible": false, 00:37:49.545 "nsid": 1, 00:37:49.545 "uuid": "b75e54ee-9447-41fb-9562-9fb2e895d6cf" 00:37:49.545 }, 00:37:49.545 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:37:49.545 } 00:37:49.545 }, 00:37:49.545 { 00:37:49.545 "method": "nvmf_subsystem_add_listener", 00:37:49.545 "params": { 00:37:49.545 "listen_address": { 00:37:49.545 "adrfam": "IPv4", 00:37:49.545 "traddr": "10.0.0.3", 00:37:49.545 "trsvcid": "4420", 00:37:49.545 "trtype": "TCP" 00:37:49.545 }, 00:37:49.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:49.545 "secure_channel": false, 00:37:49.545 "sock_impl": "ssl" 00:37:49.545 } 00:37:49.545 } 00:37:49.545 ] 00:37:49.545 } 00:37:49.545 ] 00:37:49.545 }' 00:37:49.545 20:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:37:49.804 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:37:49.804 "subsystems": [ 00:37:49.804 { 00:37:49.804 "subsystem": "keyring", 00:37:49.804 "config": [ 00:37:49.804 { 00:37:49.804 "method": "keyring_file_add_key", 00:37:49.804 "params": { 00:37:49.804 "name": "key0", 00:37:49.804 "path": "/tmp/tmp.gTlJJjye3R" 00:37:49.804 } 00:37:49.804 } 00:37:49.804 ] 00:37:49.804 }, 00:37:49.804 { 00:37:49.804 "subsystem": "iobuf", 00:37:49.804 "config": [ 00:37:49.804 { 00:37:49.804 "method": "iobuf_set_options", 00:37:49.804 "params": { 00:37:49.804 "large_bufsize": 135168, 00:37:49.804 "large_pool_count": 1024, 00:37:49.804 "small_bufsize": 8192, 00:37:49.804 "small_pool_count": 8192 00:37:49.804 } 00:37:49.804 } 00:37:49.804 ] 00:37:49.804 }, 00:37:49.804 { 00:37:49.804 "subsystem": "sock", 00:37:49.804 "config": [ 00:37:49.804 { 00:37:49.804 "method": "sock_set_default_impl", 00:37:49.804 "params": { 00:37:49.804 "impl_name": "posix" 00:37:49.804 } 00:37:49.804 }, 00:37:49.804 { 00:37:49.804 "method": "sock_impl_set_options", 00:37:49.804 "params": { 00:37:49.804 "enable_ktls": false, 00:37:49.804 "enable_placement_id": 0, 00:37:49.804 "enable_quickack": false, 00:37:49.804 "enable_recv_pipe": true, 00:37:49.804 
"enable_zerocopy_send_client": false, 00:37:49.804 "enable_zerocopy_send_server": true, 00:37:49.804 "impl_name": "ssl", 00:37:49.804 "recv_buf_size": 4096, 00:37:49.804 "send_buf_size": 4096, 00:37:49.804 "tls_version": 0, 00:37:49.804 "zerocopy_threshold": 0 00:37:49.804 } 00:37:49.804 }, 00:37:49.804 { 00:37:49.804 "method": "sock_impl_set_options", 00:37:49.804 "params": { 00:37:49.804 "enable_ktls": false, 00:37:49.804 "enable_placement_id": 0, 00:37:49.804 "enable_quickack": false, 00:37:49.804 "enable_recv_pipe": true, 00:37:49.804 "enable_zerocopy_send_client": false, 00:37:49.804 "enable_zerocopy_send_server": true, 00:37:49.804 "impl_name": "posix", 00:37:49.804 "recv_buf_size": 2097152, 00:37:49.804 "send_buf_size": 2097152, 00:37:49.804 "tls_version": 0, 00:37:49.804 "zerocopy_threshold": 0 00:37:49.804 } 00:37:49.804 } 00:37:49.804 ] 00:37:49.804 }, 00:37:49.804 { 00:37:49.804 "subsystem": "vmd", 00:37:49.805 "config": [] 00:37:49.805 }, 00:37:49.805 { 00:37:49.805 "subsystem": "accel", 00:37:49.805 "config": [ 00:37:49.805 { 00:37:49.805 "method": "accel_set_options", 00:37:49.805 "params": { 00:37:49.805 "buf_count": 2048, 00:37:49.805 "large_cache_size": 16, 00:37:49.805 "sequence_count": 2048, 00:37:49.805 "small_cache_size": 128, 00:37:49.805 "task_count": 2048 00:37:49.805 } 00:37:49.805 } 00:37:49.805 ] 00:37:49.805 }, 00:37:49.805 { 00:37:49.805 "subsystem": "bdev", 00:37:49.805 "config": [ 00:37:49.805 { 00:37:49.805 "method": "bdev_set_options", 00:37:49.805 "params": { 00:37:49.805 "bdev_auto_examine": true, 00:37:49.805 "bdev_io_cache_size": 256, 00:37:49.805 "bdev_io_pool_size": 65535, 00:37:49.805 "bdev_io_stack_size": 4096, 00:37:49.805 "iobuf_large_cache_size": 16, 00:37:49.805 "iobuf_small_cache_size": 128 00:37:49.805 } 00:37:49.805 }, 00:37:49.805 { 00:37:49.805 "method": "bdev_raid_set_options", 00:37:49.805 "params": { 00:37:49.805 "process_max_bandwidth_mb_sec": 0, 00:37:49.805 "process_window_size_kb": 1024 00:37:49.805 } 00:37:49.805 }, 00:37:49.805 { 00:37:49.805 "method": "bdev_iscsi_set_options", 00:37:49.805 "params": { 00:37:49.805 "timeout_sec": 30 00:37:49.805 } 00:37:49.805 }, 00:37:49.805 { 00:37:49.805 "method": "bdev_nvme_set_options", 00:37:49.805 "params": { 00:37:49.805 "action_on_timeout": "none", 00:37:49.805 "allow_accel_sequence": false, 00:37:49.805 "arbitration_burst": 0, 00:37:49.805 "bdev_retry_count": 3, 00:37:49.805 "ctrlr_loss_timeout_sec": 0, 00:37:49.805 "delay_cmd_submit": true, 00:37:49.805 "dhchap_dhgroups": [ 00:37:49.805 "null", 00:37:49.805 "ffdhe2048", 00:37:49.805 "ffdhe3072", 00:37:49.805 "ffdhe4096", 00:37:49.805 "ffdhe6144", 00:37:49.805 "ffdhe8192" 00:37:49.805 ], 00:37:49.805 "dhchap_digests": [ 00:37:49.805 "sha256", 00:37:49.805 "sha384", 00:37:49.805 "sha512" 00:37:49.805 ], 00:37:49.805 "disable_auto_failback": false, 00:37:49.805 "fast_io_fail_timeout_sec": 0, 00:37:49.805 "generate_uuids": false, 00:37:49.805 "high_priority_weight": 0, 00:37:49.805 "io_path_stat": false, 00:37:49.805 "io_queue_requests": 512, 00:37:49.805 "keep_alive_timeout_ms": 10000, 00:37:49.805 "low_priority_weight": 0, 00:37:49.805 "medium_priority_weight": 0, 00:37:49.805 "nvme_adminq_poll_period_us": 10000, 00:37:49.805 "nvme_error_stat": false, 00:37:49.805 "nvme_ioq_poll_period_us": 0, 00:37:49.805 "rdma_cm_event_timeout_ms": 0, 00:37:49.805 "rdma_max_cq_size": 0, 00:37:49.805 "rdma_srq_size": 0, 00:37:49.805 "reconnect_delay_sec": 0, 00:37:49.805 "timeout_admin_us": 0, 00:37:49.805 "timeout_us": 0, 00:37:49.805 
"transport_ack_timeout": 0, 00:37:49.805 "transport_retry_count": 4, 00:37:49.805 "transport_tos": 0 00:37:49.805 } 00:37:49.805 }, 00:37:49.805 { 00:37:49.805 "method": "bdev_nvme_attach_controller", 00:37:49.805 "params": { 00:37:49.805 "adrfam": "IPv4", 00:37:49.805 "ctrlr_loss_timeout_sec": 0, 00:37:49.805 "ddgst": false, 00:37:49.805 "fast_io_fail_timeout_sec": 0, 00:37:49.805 "hdgst": false, 00:37:49.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:49.805 "name": "nvme0", 00:37:49.805 "prchk_guard": false, 00:37:49.805 "prchk_reftag": false, 00:37:49.805 "psk": "key0", 00:37:49.805 "reconnect_delay_sec": 0, 00:37:49.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:49.805 "traddr": "10.0.0.3", 00:37:49.805 "trsvcid": "4420", 00:37:49.805 "trtype": "TCP" 00:37:49.805 } 00:37:49.805 }, 00:37:49.805 { 00:37:49.805 "method": "bdev_nvme_set_hotplug", 00:37:49.805 "params": { 00:37:49.805 "enable": false, 00:37:49.805 "period_us": 100000 00:37:49.805 } 00:37:49.805 }, 00:37:49.805 { 00:37:49.805 "method": "bdev_enable_histogram", 00:37:49.805 "params": { 00:37:49.805 "enable": true, 00:37:49.805 "name": "nvme0n1" 00:37:49.805 } 00:37:49.805 }, 00:37:49.805 { 00:37:49.805 "method": "bdev_wait_for_examine" 00:37:49.805 } 00:37:49.805 ] 00:37:49.805 }, 00:37:49.805 { 00:37:49.805 "subsystem": "nbd", 00:37:49.805 "config": [] 00:37:49.805 } 00:37:49.805 ] 00:37:49.805 }' 00:37:49.805 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 84134 00:37:49.805 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84134 ']' 00:37:49.805 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84134 00:37:49.805 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:37:49.805 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:49.805 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84134 00:37:50.065 killing process with pid 84134 00:37:50.065 Received shutdown signal, test time was about 1.000000 seconds 00:37:50.065 00:37:50.065 Latency(us) 00:37:50.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:50.065 =================================================================================================================== 00:37:50.065 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:50.065 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:50.065 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:50.065 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84134' 00:37:50.065 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84134 00:37:50.065 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84134 00:37:50.065 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 84097 00:37:50.065 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84097 ']' 00:37:50.065 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84097 00:37:50.065 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:37:50.065 20:36:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:50.065 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84097 00:37:50.065 killing process with pid 84097 00:37:50.065 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:50.065 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:50.065 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84097' 00:37:50.065 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84097 00:37:50.065 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84097 00:37:50.324 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:37:50.324 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:50.324 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:50.324 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:37:50.324 "subsystems": [ 00:37:50.324 { 00:37:50.324 "subsystem": "keyring", 00:37:50.324 "config": [ 00:37:50.324 { 00:37:50.324 "method": "keyring_file_add_key", 00:37:50.324 "params": { 00:37:50.324 "name": "key0", 00:37:50.324 "path": "/tmp/tmp.gTlJJjye3R" 00:37:50.324 } 00:37:50.324 } 00:37:50.324 ] 00:37:50.324 }, 00:37:50.324 { 00:37:50.324 "subsystem": "iobuf", 00:37:50.324 "config": [ 00:37:50.324 { 00:37:50.324 "method": "iobuf_set_options", 00:37:50.324 "params": { 00:37:50.324 "large_bufsize": 135168, 00:37:50.324 "large_pool_count": 1024, 00:37:50.324 "small_bufsize": 8192, 00:37:50.324 "small_pool_count": 8192 00:37:50.324 } 00:37:50.324 } 00:37:50.324 ] 00:37:50.324 }, 00:37:50.324 { 00:37:50.324 "subsystem": "sock", 00:37:50.324 "config": [ 00:37:50.324 { 00:37:50.324 "method": "sock_set_default_impl", 00:37:50.324 "params": { 00:37:50.324 "impl_name": "posix" 00:37:50.324 } 00:37:50.324 }, 00:37:50.324 { 00:37:50.324 "method": "sock_impl_set_options", 00:37:50.324 "params": { 00:37:50.324 "enable_ktls": false, 00:37:50.324 "enable_placement_id": 0, 00:37:50.324 "enable_quickack": false, 00:37:50.324 "enable_recv_pipe": true, 00:37:50.324 "enable_zerocopy_send_client": false, 00:37:50.324 "enable_zerocopy_send_server": true, 00:37:50.324 "impl_name": "ssl", 00:37:50.324 "recv_buf_size": 4096, 00:37:50.324 "send_buf_size": 4096, 00:37:50.324 "tls_version": 0, 00:37:50.324 "zerocopy_threshold": 0 00:37:50.324 } 00:37:50.324 }, 00:37:50.324 { 00:37:50.324 "method": "sock_impl_set_options", 00:37:50.324 "params": { 00:37:50.324 "enable_ktls": false, 00:37:50.324 "enable_placement_id": 0, 00:37:50.324 "enable_quickack": false, 00:37:50.324 "enable_recv_pipe": true, 00:37:50.324 "enable_zerocopy_send_client": false, 00:37:50.324 "enable_zerocopy_send_server": true, 00:37:50.324 "impl_name": "posix", 00:37:50.324 "recv_buf_size": 2097152, 00:37:50.324 "send_buf_size": 2097152, 00:37:50.324 "tls_version": 0, 00:37:50.324 "zerocopy_threshold": 0 00:37:50.324 } 00:37:50.324 } 00:37:50.324 ] 00:37:50.324 }, 00:37:50.324 { 00:37:50.324 "subsystem": "vmd", 00:37:50.324 "config": [] 00:37:50.324 }, 00:37:50.324 { 00:37:50.324 "subsystem": "accel", 00:37:50.324 "config": [ 00:37:50.324 { 00:37:50.324 "method": 
"accel_set_options", 00:37:50.324 "params": { 00:37:50.324 "buf_count": 2048, 00:37:50.324 "large_cache_size": 16, 00:37:50.324 "sequence_count": 2048, 00:37:50.324 "small_cache_size": 128, 00:37:50.324 "task_count": 2048 00:37:50.324 } 00:37:50.324 } 00:37:50.324 ] 00:37:50.324 }, 00:37:50.324 { 00:37:50.324 "subsystem": "bdev", 00:37:50.324 "config": [ 00:37:50.324 { 00:37:50.324 "method": "bdev_set_options", 00:37:50.324 "params": { 00:37:50.324 "bdev_auto_examine": true, 00:37:50.324 "bdev_io_cache_size": 256, 00:37:50.324 "bdev_io_pool_size": 65535, 00:37:50.324 "bdev_io_stack_size": 4096, 00:37:50.324 "iobuf_large_cache_size": 16, 00:37:50.324 "iobuf_small_cache_size": 128 00:37:50.324 } 00:37:50.324 }, 00:37:50.324 { 00:37:50.324 "method": "bdev_raid_set_options", 00:37:50.324 "params": { 00:37:50.324 "process_max_bandwidth_mb_sec": 0, 00:37:50.324 "process_window_size_kb": 1024 00:37:50.324 } 00:37:50.324 }, 00:37:50.324 { 00:37:50.324 "method": "bdev_iscsi_set_options", 00:37:50.324 "params": { 00:37:50.324 "timeout_sec": 30 00:37:50.324 } 00:37:50.324 }, 00:37:50.324 { 00:37:50.325 "method": "bdev_nvme_set_options", 00:37:50.325 "params": { 00:37:50.325 "action_on_timeout": "none", 00:37:50.325 "allow_accel_sequence": false, 00:37:50.325 "arbitration_burst": 0, 00:37:50.325 "bdev_retry_count": 3, 00:37:50.325 "ctrlr_loss_timeout_sec": 0, 00:37:50.325 "delay_cmd_submit": true, 00:37:50.325 "dhchap_dhgroups": [ 00:37:50.325 "null", 00:37:50.325 "ffdhe2048", 00:37:50.325 "ffdhe3072", 00:37:50.325 "ffdhe4096", 00:37:50.325 "ffdhe6144", 00:37:50.325 "ffdhe8192" 00:37:50.325 ], 00:37:50.325 "dhchap_digests": [ 00:37:50.325 "sha256", 00:37:50.325 "sha384", 00:37:50.325 "sha512" 00:37:50.325 ], 00:37:50.325 "disable_auto_failback": false, 00:37:50.325 "fast_io_fail_timeout_sec": 0, 00:37:50.325 "generate_uuids": false, 00:37:50.325 "high_priority_weight": 0, 00:37:50.325 "io_path_stat": false, 00:37:50.325 "io_queue_requests": 0, 00:37:50.325 "keep_alive_timeout_ms": 10000, 00:37:50.325 "low_priority_weight": 0, 00:37:50.325 "medium_priority_weight": 0, 00:37:50.325 "nvme_adminq_poll_period_us": 10000, 00:37:50.325 "nvme_error_stat": false, 00:37:50.325 "nvme_ioq_poll_period_us": 0, 00:37:50.325 "rdma_cm_event_timeout_ms": 0, 00:37:50.325 "rdma_max_cq_size": 0, 00:37:50.325 "rdma_srq_size": 0, 00:37:50.325 "reconnect_delay_sec": 0, 00:37:50.325 "timeout_admin_us": 0, 00:37:50.325 "timeout_us": 0, 00:37:50.325 "transport_ack_timeout": 0, 00:37:50.325 "transport_retry_count": 4, 00:37:50.325 "transport_tos": 0 00:37:50.325 } 00:37:50.325 }, 00:37:50.325 { 00:37:50.325 "method": "bdev_nvme_set_hotplug", 00:37:50.325 "params": { 00:37:50.325 "enable": false, 00:37:50.325 "period_us": 100000 00:37:50.325 } 00:37:50.325 }, 00:37:50.325 { 00:37:50.325 "method": "bdev_malloc_create", 00:37:50.325 "params": { 00:37:50.325 "block_size": 4096, 00:37:50.325 "dif_is_head_of_md": false, 00:37:50.325 "dif_pi_format": 0, 00:37:50.325 "dif_type": 0, 00:37:50.325 "md_size": 0, 00:37:50.325 "name": "malloc0", 00:37:50.325 "num_blocks": 8192, 00:37:50.325 "optimal_io_boundary": 0, 00:37:50.325 "physical_block_size": 4096, 00:37:50.325 "uuid": "b75e54ee-9447-41fb-9562-9fb2e895d6cf" 00:37:50.325 } 00:37:50.325 }, 00:37:50.325 { 00:37:50.325 "method": "bdev_wait_for_examine" 00:37:50.325 } 00:37:50.325 ] 00:37:50.325 }, 00:37:50.325 { 00:37:50.325 "subsystem": "nbd", 00:37:50.325 "config": [] 00:37:50.325 }, 00:37:50.325 { 00:37:50.325 "subsystem": "scheduler", 00:37:50.325 "config": [ 00:37:50.325 { 
00:37:50.325 "method": "framework_set_scheduler", 00:37:50.325 "params": { 00:37:50.325 "name": "static" 00:37:50.325 } 00:37:50.325 } 00:37:50.325 ] 00:37:50.325 }, 00:37:50.325 { 00:37:50.325 "subsystem": "nvmf", 00:37:50.325 "config": [ 00:37:50.325 { 00:37:50.325 "method": "nvmf_set_config", 00:37:50.325 "params": { 00:37:50.325 "admin_cmd_passthru": { 00:37:50.325 "identify_ctrlr": false 00:37:50.325 }, 00:37:50.325 "dhchap_dhgroups": [ 00:37:50.325 "null", 00:37:50.325 "ffdhe2048", 00:37:50.325 "ffdhe3072", 00:37:50.325 "ffdhe4096", 00:37:50.325 "ffdhe6144", 00:37:50.325 "ffdhe8192" 00:37:50.325 ], 00:37:50.325 "dhchap_digests": [ 00:37:50.325 "sha256", 00:37:50.325 "sha384", 00:37:50.325 "sha512" 00:37:50.325 ], 00:37:50.325 "discovery_filter": "match_any" 00:37:50.325 } 00:37:50.325 }, 00:37:50.325 { 00:37:50.325 "method": "nvmf_set_max_subsystems", 00:37:50.325 "params": { 00:37:50.325 "max_subsystems": 1024 00:37:50.325 } 00:37:50.325 }, 00:37:50.325 { 00:37:50.325 "method": "nvmf_set_crdt", 00:37:50.325 "params": { 00:37:50.325 "crdt1": 0, 00:37:50.325 "crdt2": 0, 00:37:50.325 "crdt3": 0 00:37:50.325 } 00:37:50.325 }, 00:37:50.325 { 00:37:50.325 "method": "nvmf_create_transport", 00:37:50.325 "params": { 00:37:50.325 "abort_timeout_sec": 1, 00:37:50.325 "ack_timeout": 0, 00:37:50.325 "buf_cache_size": 4294967295, 00:37:50.325 "c2h_success": false, 00:37:50.325 "data_wr_pool_size": 0, 00:37:50.325 "dif_insert_or_strip": false, 00:37:50.325 "in_capsule_data_size": 4096, 00:37:50.325 "io_unit_size": 131072, 00:37:50.325 "max_aq_depth": 128, 00:37:50.325 "max_io_qpairs_per_ctrlr": 127, 00:37:50.325 "max_io_size": 131072, 00:37:50.325 "max_queue_depth": 128, 00:37:50.325 "num_shared_buffers": 511, 00:37:50.325 "sock_priority": 0, 00:37:50.325 "trtype": "TCP", 00:37:50.325 "zcopy": false 00:37:50.325 } 00:37:50.325 }, 00:37:50.325 { 00:37:50.325 "method": "nvmf_create_subsystem", 00:37:50.325 "params": { 00:37:50.325 "allow_any_host": false, 00:37:50.325 "ana_reporting": false, 00:37:50.325 "max_cntlid": 65519, 00:37:50.325 "max_namespaces": 32, 00:37:50.325 "min_cntlid": 1, 00:37:50.325 "model_number": "SPDK bdev Controller", 00:37:50.325 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:50.325 "serial_number": "00000000000000000000" 00:37:50.325 } 00:37:50.325 }, 00:37:50.325 { 00:37:50.325 "method": "nvmf_subsystem_add_host", 00:37:50.325 "params": { 00:37:50.325 "host": "nqn.2016-06.io.spdk:host1", 00:37:50.325 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:50.325 "psk": "key0" 00:37:50.325 } 00:37:50.325 }, 00:37:50.325 { 00:37:50.325 "method": "nvmf_subsystem_add_ns", 00:37:50.325 "params": { 00:37:50.325 "namespace": { 00:37:50.325 "bdev_name": "malloc0", 00:37:50.325 "nguid": "B75E54EE944741FB95629FB2E895D6CF", 00:37:50.325 "no_auto_visible": false, 00:37:50.325 "nsid": 1, 00:37:50.325 "uuid": "b75e54ee-9447-41fb-9562-9fb2e895d6cf" 00:37:50.325 }, 00:37:50.325 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:37:50.325 } 00:37:50.325 }, 00:37:50.325 { 00:37:50.325 "method": "nvmf_subsystem_add_listener", 00:37:50.325 "params": { 00:37:50.325 "listen_address": { 00:37:50.325 "adrfam": "IPv4", 00:37:50.325 "traddr": "10.0.0.3", 00:37:50.325 "trsvcid": "4420", 00:37:50.325 "trtype": "TCP" 00:37:50.325 }, 00:37:50.325 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:50.325 "secure_channel": false, 00:37:50.325 "sock_impl": "ssl" 00:37:50.325 } 00:37:50.325 } 00:37:50.325 ] 00:37:50.325 } 00:37:50.325 ] 00:37:50.325 }' 00:37:50.325 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 
-- # set +x 00:37:50.325 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=84211 00:37:50.325 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:37:50.325 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 84211 00:37:50.325 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84211 ']' 00:37:50.325 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:50.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:50.325 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:50.325 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:50.325 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:50.325 20:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:50.325 [2024-10-01 20:36:45.557102] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:37:50.325 [2024-10-01 20:36:45.557841] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:50.584 [2024-10-01 20:36:45.695176] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:50.584 [2024-10-01 20:36:45.743736] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:50.584 [2024-10-01 20:36:45.743803] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:50.584 [2024-10-01 20:36:45.743829] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:50.584 [2024-10-01 20:36:45.743836] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:50.584 [2024-10-01 20:36:45.743842] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
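The nvmf_tgt invocation above takes its entire configuration over a file descriptor (-c /dev/fd/62): the JSON printed by target/tls.sh@273 is echoed into a process substitution rather than written to disk. A minimal sketch of the same launch pattern, reusing the binary path and flags exactly as they appear in this log (the config body here is a stand-in; the real one is printed above):

    # Bash expands <(...) to /dev/fd/NN, which is where -c /dev/fd/62 comes from.
    config='{ "subsystems": [] }'   # stand-in; the full config is echoed above
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF \
        -c <(echo "$config")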
00:37:50.584 [2024-10-01 20:36:45.743913] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:50.842 [2024-10-01 20:36:45.970611] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:50.842 [2024-10-01 20:36:46.002570] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:50.842 [2024-10-01 20:36:46.002742] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:51.409 20:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:51.409 20:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:37:51.409 20:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:51.409 20:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:51.409 20:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:51.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:51.409 20:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:51.409 20:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=84255 00:37:51.409 20:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 84255 /var/tmp/bdevperf.sock 00:37:51.409 20:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84255 ']' 00:37:51.409 20:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:51.409 20:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:51.409 20:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
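The JSON handed to bdevperf below configures the keyring and the TLS-secured controller attach statically. The same state could also be reached at runtime through SPDK's rpc.py; the following is a hedged sketch of that equivalent (option spellings per rpc.py's usual interface; this test does not issue these calls, it passes the settings as pre-generated config):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # register the PSK file under the name the attach call refers to ("key0")
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gTlJJjye3R
    # attach the TLS-protected controller, mirroring the JSON params below
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0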
00:37:51.409 20:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:51.409 20:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:51.409 20:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:37:51.409 20:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:37:51.409 "subsystems": [ 00:37:51.409 { 00:37:51.409 "subsystem": "keyring", 00:37:51.409 "config": [ 00:37:51.409 { 00:37:51.409 "method": "keyring_file_add_key", 00:37:51.409 "params": { 00:37:51.409 "name": "key0", 00:37:51.409 "path": "/tmp/tmp.gTlJJjye3R" 00:37:51.409 } 00:37:51.409 } 00:37:51.409 ] 00:37:51.409 }, 00:37:51.409 { 00:37:51.409 "subsystem": "iobuf", 00:37:51.409 "config": [ 00:37:51.409 { 00:37:51.409 "method": "iobuf_set_options", 00:37:51.409 "params": { 00:37:51.409 "large_bufsize": 135168, 00:37:51.409 "large_pool_count": 1024, 00:37:51.409 "small_bufsize": 8192, 00:37:51.409 "small_pool_count": 8192 00:37:51.409 } 00:37:51.409 } 00:37:51.409 ] 00:37:51.409 }, 00:37:51.409 { 00:37:51.409 "subsystem": "sock", 00:37:51.409 "config": [ 00:37:51.409 { 00:37:51.409 "method": "sock_set_default_impl", 00:37:51.409 "params": { 00:37:51.409 "impl_name": "posix" 00:37:51.409 } 00:37:51.409 }, 00:37:51.409 { 00:37:51.409 "method": "sock_impl_set_options", 00:37:51.409 "params": { 00:37:51.409 "enable_ktls": false, 00:37:51.409 "enable_placement_id": 0, 00:37:51.409 "enable_quickack": false, 00:37:51.409 "enable_recv_pipe": true, 00:37:51.409 "enable_zerocopy_send_client": false, 00:37:51.409 "enable_zerocopy_send_server": true, 00:37:51.409 "impl_name": "ssl", 00:37:51.409 "recv_buf_size": 4096, 00:37:51.409 "send_buf_size": 4096, 00:37:51.409 "tls_version": 0, 00:37:51.409 "zerocopy_threshold": 0 00:37:51.409 } 00:37:51.409 }, 00:37:51.409 { 00:37:51.409 "method": "sock_impl_set_options", 00:37:51.409 "params": { 00:37:51.409 "enable_ktls": false, 00:37:51.409 "enable_placement_id": 0, 00:37:51.409 "enable_quickack": false, 00:37:51.409 "enable_recv_pipe": true, 00:37:51.409 "enable_zerocopy_send_client": false, 00:37:51.409 "enable_zerocopy_send_server": true, 00:37:51.409 "impl_name": "posix", 00:37:51.409 "recv_buf_size": 2097152, 00:37:51.409 "send_buf_size": 2097152, 00:37:51.409 "tls_version": 0, 00:37:51.409 "zerocopy_threshold": 0 00:37:51.409 } 00:37:51.409 } 00:37:51.409 ] 00:37:51.409 }, 00:37:51.409 { 00:37:51.409 "subsystem": "vmd", 00:37:51.409 "config": [] 00:37:51.409 }, 00:37:51.409 { 00:37:51.409 "subsystem": "accel", 00:37:51.409 "config": [ 00:37:51.409 { 00:37:51.409 "method": "accel_set_options", 00:37:51.409 "params": { 00:37:51.409 "buf_count": 2048, 00:37:51.409 "large_cache_size": 16, 00:37:51.409 "sequence_count": 2048, 00:37:51.409 "small_cache_size": 128, 00:37:51.409 "task_count": 2048 00:37:51.409 } 00:37:51.409 } 00:37:51.409 ] 00:37:51.409 }, 00:37:51.409 { 00:37:51.409 "subsystem": "bdev", 00:37:51.409 "config": [ 00:37:51.409 { 00:37:51.409 "method": "bdev_set_options", 00:37:51.409 "params": { 00:37:51.409 "bdev_auto_examine": true, 00:37:51.409 "bdev_io_cache_size": 256, 00:37:51.409 "bdev_io_pool_size": 65535, 00:37:51.409 "bdev_io_stack_size": 4096, 00:37:51.409 "iobuf_large_cache_size": 16, 00:37:51.409 "iobuf_small_cache_size": 128 00:37:51.409 } 00:37:51.409 }, 00:37:51.409 { 00:37:51.409 "method": "bdev_raid_set_options", 
00:37:51.409 "params": { 00:37:51.409 "process_max_bandwidth_mb_sec": 0, 00:37:51.409 "process_window_size_kb": 1024 00:37:51.410 } 00:37:51.410 }, 00:37:51.410 { 00:37:51.410 "method": "bdev_iscsi_set_options", 00:37:51.410 "params": { 00:37:51.410 "timeout_sec": 30 00:37:51.410 } 00:37:51.410 }, 00:37:51.410 { 00:37:51.410 "method": "bdev_nvme_set_options", 00:37:51.410 "params": { 00:37:51.410 "action_on_timeout": "none", 00:37:51.410 "allow_accel_sequence": false, 00:37:51.410 "arbitration_burst": 0, 00:37:51.410 "bdev_retry_count": 3, 00:37:51.410 "ctrlr_loss_timeout_sec": 0, 00:37:51.410 "delay_cmd_submit": true, 00:37:51.410 "dhchap_dhgroups": [ 00:37:51.410 "null", 00:37:51.410 "ffdhe2048", 00:37:51.410 "ffdhe3072", 00:37:51.410 "ffdhe4096", 00:37:51.410 "ffdhe6144", 00:37:51.410 "ffdhe8192" 00:37:51.410 ], 00:37:51.410 "dhchap_digests": [ 00:37:51.410 "sha256", 00:37:51.410 "sha384", 00:37:51.410 "sha512" 00:37:51.410 ], 00:37:51.410 "disable_auto_failback": false, 00:37:51.410 "fast_io_fail_timeout_sec": 0, 00:37:51.410 "generate_uuids": false, 00:37:51.410 "high_priority_weight": 0, 00:37:51.410 "io_path_stat": false, 00:37:51.410 "io_queue_requests": 512, 00:37:51.410 "keep_alive_timeout_ms": 10000, 00:37:51.410 "low_priority_weight": 0, 00:37:51.410 "medium_priority_weight": 0, 00:37:51.410 "nvme_adminq_poll_period_us": 10000, 00:37:51.410 "nvme_error_stat": false, 00:37:51.410 "nvme_ioq_poll_period_us": 0, 00:37:51.410 "rdma_cm_event_timeout_ms": 0, 00:37:51.410 "rdma_max_cq_size": 0, 00:37:51.410 "rdma_srq_size": 0, 00:37:51.410 "reconnect_delay_sec": 0, 00:37:51.410 "timeout_admin_us": 0, 00:37:51.410 "timeout_us": 0, 00:37:51.410 "transport_ack_timeout": 0, 00:37:51.410 "transport_retry_count": 4, 00:37:51.410 "transport_tos": 0 00:37:51.410 } 00:37:51.410 }, 00:37:51.410 { 00:37:51.410 "method": "bdev_nvme_attach_controller", 00:37:51.410 "params": { 00:37:51.410 "adrfam": "IPv4", 00:37:51.410 "ctrlr_loss_timeout_sec": 0, 00:37:51.410 "ddgst": false, 00:37:51.410 "fast_io_fail_timeout_sec": 0, 00:37:51.410 "hdgst": false, 00:37:51.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:51.410 "name": "nvme0", 00:37:51.410 "prchk_guard": false, 00:37:51.410 "prchk_reftag": false, 00:37:51.410 "psk": "key0", 00:37:51.410 "reconnect_delay_sec": 0, 00:37:51.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:51.410 "traddr": "10.0.0.3", 00:37:51.410 "trsvcid": "4420", 00:37:51.410 "trtype": "TCP" 00:37:51.410 } 00:37:51.410 }, 00:37:51.410 { 00:37:51.410 "method": "bdev_nvme_set_hotplug", 00:37:51.410 "params": { 00:37:51.410 "enable": false, 00:37:51.410 "period_us": 100000 00:37:51.410 } 00:37:51.410 }, 00:37:51.410 { 00:37:51.410 "method": "bdev_enable_histogram", 00:37:51.410 "params": { 00:37:51.410 "enable": true, 00:37:51.410 "name": "nvme0n1" 00:37:51.410 } 00:37:51.410 }, 00:37:51.410 { 00:37:51.410 "method": "bdev_wait_for_examine" 00:37:51.410 } 00:37:51.410 ] 00:37:51.410 }, 00:37:51.410 { 00:37:51.410 "subsystem": "nbd", 00:37:51.410 "config": [] 00:37:51.410 } 00:37:51.410 ] 00:37:51.410 }' 00:37:51.669 [2024-10-01 20:36:46.673635] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:37:51.669 [2024-10-01 20:36:46.673909] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84255 ] 00:37:51.669 [2024-10-01 20:36:46.815416] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:51.669 [2024-10-01 20:36:46.887874] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:51.929 [2024-10-01 20:36:47.071415] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:52.496 20:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:52.496 20:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:37:52.496 20:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:37:52.496 20:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:37:52.754 20:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:52.755 20:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:53.014 Running I/O for 1 seconds... 00:37:53.955 4736.00 IOPS, 18.50 MiB/s 00:37:53.955 Latency(us) 00:37:53.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:53.955 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:37:53.955 Verification LBA range: start 0x0 length 0x2000 00:37:53.955 nvme0n1 : 1.02 4778.32 18.67 0.00 0.00 26539.75 10664.49 22043.93 00:37:53.955 =================================================================================================================== 00:37:53.955 Total : 4778.32 18.67 0.00 0.00 26539.75 10664.49 22043.93 00:37:53.955 { 00:37:53.955 "results": [ 00:37:53.955 { 00:37:53.955 "job": "nvme0n1", 00:37:53.955 "core_mask": "0x2", 00:37:53.955 "workload": "verify", 00:37:53.955 "status": "finished", 00:37:53.955 "verify_range": { 00:37:53.955 "start": 0, 00:37:53.955 "length": 8192 00:37:53.955 }, 00:37:53.955 "queue_depth": 128, 00:37:53.955 "io_size": 4096, 00:37:53.955 "runtime": 1.017931, 00:37:53.955 "iops": 4778.319945065039, 00:37:53.955 "mibps": 18.66531228541031, 00:37:53.955 "io_failed": 0, 00:37:53.955 "io_timeout": 0, 00:37:53.955 "avg_latency_us": 26539.752727272727, 00:37:53.955 "min_latency_us": 10664.494545454545, 00:37:53.955 "max_latency_us": 22043.927272727273 00:37:53.955 } 00:37:53.955 ], 00:37:53.955 "core_count": 1 00:37:53.955 } 00:37:53.955 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:37:53.955 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:37:53.955 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:37:53.955 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:37:53.955 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:37:53.955 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:37:53.955 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # 
find /dev/shm -name '*.0' -printf '%f\n' 00:37:53.955 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:37:53.955 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:37:53.955 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:37:53.955 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:37:53.955 nvmf_trace.0 00:37:53.955 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:37:53.955 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 84255 00:37:53.955 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84255 ']' 00:37:53.955 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84255 00:37:54.215 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:37:54.215 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:54.215 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84255 00:37:54.215 killing process with pid 84255 00:37:54.215 Received shutdown signal, test time was about 1.000000 seconds 00:37:54.215 00:37:54.215 Latency(us) 00:37:54.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:54.215 =================================================================================================================== 00:37:54.215 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:54.215 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:54.215 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:54.215 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84255' 00:37:54.215 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84255 00:37:54.215 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84255 00:37:54.215 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:37:54.215 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:54.215 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:37:54.475 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:54.475 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:37:54.475 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:54.475 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:54.475 rmmod nvme_tcp 00:37:54.475 rmmod nvme_fabrics 00:37:54.475 rmmod nvme_keyring 00:37:54.475 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:54.475 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:37:54.475 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:37:54.475 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@515 -- # '[' -n 84211 ']' 00:37:54.475 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 84211 00:37:54.475 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84211 ']' 00:37:54.475 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84211 00:37:54.475 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:37:54.475 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:54.475 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84211 00:37:54.475 killing process with pid 84211 00:37:54.475 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:54.475 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:54.475 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84211' 00:37:54.475 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84211 00:37:54.475 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84211 00:37:54.734 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:54.734 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:54.734 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:54.734 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:37:54.734 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:37:54.734 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:54.734 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:37:54.734 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:54.734 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:37:54.734 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:37:54.734 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:37:54.734 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:37:54.734 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:37:54.734 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:37:54.734 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:37:54.734 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:37:54.734 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:37:54.734 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:37:54.734 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:37:54.734 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 00:37:55.003 20:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:55.003 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:55.003 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:37:55.003 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:55.003 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:55.003 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:55.003 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:37:55.003 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.9RHKYJu84C /tmp/tmp.0EgO7c1W1P /tmp/tmp.gTlJJjye3R 00:37:55.003 ************************************ 00:37:55.003 END TEST nvmf_tls 00:37:55.003 ************************************ 00:37:55.003 00:37:55.003 real 1m21.946s 00:37:55.003 user 2m12.445s 00:37:55.003 sys 0m26.896s 00:37:55.003 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:55.003 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:55.003 20:36:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:37:55.004 20:36:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:55.004 20:36:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:55.004 20:36:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:37:55.004 ************************************ 00:37:55.004 START TEST nvmf_fips 00:37:55.004 ************************************ 00:37:55.004 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:37:55.004 * Looking for test storage... 
00:37:55.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:37:55.004 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:55.004 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:37:55.004 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:55.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:55.278 --rc genhtml_branch_coverage=1 00:37:55.278 --rc genhtml_function_coverage=1 00:37:55.278 --rc genhtml_legend=1 00:37:55.278 --rc geninfo_all_blocks=1 00:37:55.278 --rc geninfo_unexecuted_blocks=1 00:37:55.278 00:37:55.278 ' 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:55.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:55.278 --rc genhtml_branch_coverage=1 00:37:55.278 --rc genhtml_function_coverage=1 00:37:55.278 --rc genhtml_legend=1 00:37:55.278 --rc geninfo_all_blocks=1 00:37:55.278 --rc geninfo_unexecuted_blocks=1 00:37:55.278 00:37:55.278 ' 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:55.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:55.278 --rc genhtml_branch_coverage=1 00:37:55.278 --rc genhtml_function_coverage=1 00:37:55.278 --rc genhtml_legend=1 00:37:55.278 --rc geninfo_all_blocks=1 00:37:55.278 --rc geninfo_unexecuted_blocks=1 00:37:55.278 00:37:55.278 ' 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:55.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:55.278 --rc genhtml_branch_coverage=1 00:37:55.278 --rc genhtml_function_coverage=1 00:37:55.278 --rc genhtml_legend=1 00:37:55.278 --rc geninfo_all_blocks=1 00:37:55.278 --rc geninfo_unexecuted_blocks=1 00:37:55.278 00:37:55.278 ' 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
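The cmp_versions traces above (lt 1.15 2 for lcov, and the ge 3.1.1 3.0.0 check for openssl further below) walk each dotted component in turn. The same ordering can be expressed compactly with GNU sort -V; this sketch is equivalent in effect, not SPDK's actual scripts/common.sh implementation:

    # true when $1 sorts strictly before $2 in version order
    version_lt() {
        [ "$1" = "$2" ] && return 1
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    version_lt 1.15 2      && echo "lcov predates 2.x"        # matches lt above
    version_lt 3.1.1 3.0.0 || echo "openssl >= 3.0.0"         # the ge check holds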
00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:55.278 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:55.279 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:37:55.279 Error setting digest 00:37:55.279 40E27A76CE7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:37:55.279 40E27A76CE7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:37:55.279 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:55.280 
20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@458 -- # nvmf_veth_init 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:37:55.280 Cannot find device "nvmf_init_br" 00:37:55.280 20:36:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:37:55.280 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:37:55.539 Cannot find device "nvmf_init_br2" 00:37:55.539 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:37:55.539 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:37:55.539 Cannot find device "nvmf_tgt_br" 00:37:55.539 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:37:55.539 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:37:55.539 Cannot find device "nvmf_tgt_br2" 00:37:55.539 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:37:55.539 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:37:55.539 Cannot find device "nvmf_init_br" 00:37:55.539 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:37:55.539 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:37:55.539 Cannot find device "nvmf_init_br2" 00:37:55.539 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:37:55.539 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:37:55.539 Cannot find device "nvmf_tgt_br" 00:37:55.539 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:37:55.539 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:37:55.539 Cannot find device "nvmf_tgt_br2" 00:37:55.539 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:37:55.540 Cannot find device "nvmf_br" 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:37:55.540 Cannot find device "nvmf_init_if" 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:37:55.540 Cannot find device "nvmf_init_if2" 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:55.540 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:55.540 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:55.540 20:36:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:37:55.540 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
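At this point the trace has built the complete test topology: a namespace nvmf_tgt_ns_spdk holding the target ends of the veth pairs (10.0.0.3 and 10.0.0.4), the initiator ends on the host (10.0.0.1 and 10.0.0.2), and a bridge nvmf_br joining the peer interfaces so the two sides can talk. A minimal standalone reproduction for one veth pair, with names and addresses taken from the trace (assumes root and iproute2):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3   # the host should now reach the namespace

The iptables rules inserted right after (via the ipts helper) open TCP port 4420 on the initiator interfaces and allow forwarding across the bridge; a sketch of ipts itself appears alongside the matching cleanup further below.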
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:37:55.800 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:55.800 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:37:55.800 00:37:55.800 --- 10.0.0.3 ping statistics --- 00:37:55.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:55.800 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:37:55.800 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:37:55.800 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:37:55.800 00:37:55.800 --- 10.0.0.4 ping statistics --- 00:37:55.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:55.800 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:55.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:55.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:37:55.800 00:37:55.800 --- 10.0.0.1 ping statistics --- 00:37:55.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:55.800 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:37:55.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:55.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:37:55.800 00:37:55.800 --- 10.0.0.2 ping statistics --- 00:37:55.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:55.800 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # return 0 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:37:55.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=84589 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 84589 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 84589 ']' 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:55.800 20:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:37:55.800 [2024-10-01 20:36:51.034913] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
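Note what common.sh@227 did just above: NVMF_APP is prefixed with the namespace wrapper, so from here on the nvmf_tgt process runs inside nvmf_tgt_ns_spdk while the harness and the initiators stay on the host side of the bridge. The launch sequence that follows, in condensed form (binary and socket paths as traced; the readiness loop is a simplification of waitforlisten):

  NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
  "${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # Poll the RPC socket until the target answers (waitforlisten, simplified):
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
  done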
00:37:55.800 [2024-10-01 20:36:51.035010] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:56.059 [2024-10-01 20:36:51.176288] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:56.059 [2024-10-01 20:36:51.248612] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:56.059 [2024-10-01 20:36:51.248867] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:56.059 [2024-10-01 20:36:51.248973] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:56.059 [2024-10-01 20:36:51.249086] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:56.059 [2024-10-01 20:36:51.249216] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:56.059 [2024-10-01 20:36:51.249344] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:56.997 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:56.997 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:37:56.997 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:56.997 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:56.997 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:37:56.997 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:56.997 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:37:56.997 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:37:56.997 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:37:56.997 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.gS2 00:37:56.997 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:37:56.997 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.gS2 00:37:56.997 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.gS2 00:37:56.997 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.gS2 00:37:56.997 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:57.256 [2024-10-01 20:36:52.382000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:57.256 [2024-10-01 20:36:52.397974] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:57.256 [2024-10-01 20:36:52.398457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:57.256 malloc0 00:37:57.256 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:57.256 20:36:52 
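Before any TLS connection is attempted, the FIPS test materializes its pre-shared key as a file: the NVMeTLSkey-1:01:... string is the NVMe/TCP PSK interchange format, echo -n keeps a trailing newline out of the key material, and the 0600 mode keeps the file private to the test user (SPDK's file-based keyring expects restrictive permissions). Condensed from fips.sh@137-140 above:

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=$(mktemp -t spdk-psk.XXX)   # /tmp/spdk-psk.gS2 in this run
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"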
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=84650 00:37:57.256 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:37:57.256 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 84650 /var/tmp/bdevperf.sock 00:37:57.256 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 84650 ']' 00:37:57.256 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:57.256 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:57.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:57.256 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:57.256 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:57.256 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:37:57.515 [2024-10-01 20:36:52.523866] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:37:57.515 [2024-10-01 20:36:52.523930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84650 ] 00:37:57.515 [2024-10-01 20:36:52.659202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:57.516 [2024-10-01 20:36:52.727328] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:57.775 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:57.775 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:37:57.775 20:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.gS2 00:37:58.034 20:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:37:58.293 [2024-10-01 20:36:53.350746] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:58.293 TLSTESTn1 00:37:58.293 20:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:58.293 Running I/O for 10 seconds... 
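This is the initiator half of the TLS test: the key file is registered with bdevperf's keyring under the name key0, and the controller attach references the key by that name rather than by path, connecting to the listener the target opened on 10.0.0.3:4420. Condensed from the trace (same rpc.py and bdevperf.py paths as above):

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
  $RPC keyring_file_add_key key0 /tmp/spdk-psk.gS2
  $RPC bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  # Start the queued verify workload (-q 128, 4 KiB I/O, 10 seconds):
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests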
00:38:08.681 4480.00 IOPS, 17.50 MiB/s 4480.00 IOPS, 17.50 MiB/s 4516.67 IOPS, 17.64 MiB/s 4530.50 IOPS, 17.70 MiB/s 4547.20 IOPS, 17.76 MiB/s 4559.67 IOPS, 17.81 MiB/s 4562.86 IOPS, 17.82 MiB/s 4573.50 IOPS, 17.87 MiB/s 4579.56 IOPS, 17.89 MiB/s 4582.40 IOPS, 17.90 MiB/s 00:38:08.681 Latency(us) 00:38:08.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:08.681 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:38:08.681 Verification LBA range: start 0x0 length 0x2000 00:38:08.681 TLSTESTn1 : 10.02 4585.14 17.91 0.00 0.00 27866.30 9889.98 23235.49 00:38:08.681 =================================================================================================================== 00:38:08.681 Total : 4585.14 17.91 0.00 0.00 27866.30 9889.98 23235.49 00:38:08.681 { 00:38:08.681 "results": [ 00:38:08.681 { 00:38:08.681 "job": "TLSTESTn1", 00:38:08.681 "core_mask": "0x4", 00:38:08.681 "workload": "verify", 00:38:08.681 "status": "finished", 00:38:08.681 "verify_range": { 00:38:08.681 "start": 0, 00:38:08.681 "length": 8192 00:38:08.681 }, 00:38:08.681 "queue_depth": 128, 00:38:08.681 "io_size": 4096, 00:38:08.681 "runtime": 10.021949, 00:38:08.681 "iops": 4585.136084807456, 00:38:08.681 "mibps": 17.910687831279127, 00:38:08.681 "io_failed": 0, 00:38:08.681 "io_timeout": 0, 00:38:08.681 "avg_latency_us": 27866.298181818183, 00:38:08.681 "min_latency_us": 9889.978181818182, 00:38:08.681 "max_latency_us": 23235.49090909091 00:38:08.681 } 00:38:08.681 ], 00:38:08.681 "core_count": 1 00:38:08.681 } 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:08.681 nvmf_trace.0 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 84650 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 84650 ']' 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 84650 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # 
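A quick consistency check on the summary above: at 4 KiB per I/O, 4585.14 IOPS x 4096 B = 18,780,733 B/s, which is 17.91 MiB/s, matching the reported throughput; and with queue depth 128, Little's law gives 128 / 4585.14 IOPS = 27.9 ms of average time in flight per I/O, consistent with the reported 27866.30 us average latency.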
ps --no-headers -o comm= 84650 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:38:08.681 killing process with pid 84650 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84650' 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 84650 00:38:08.681 Received shutdown signal, test time was about 10.000000 seconds 00:38:08.681 00:38:08.681 Latency(us) 00:38:08.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:08.681 =================================================================================================================== 00:38:08.681 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 84650 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:08.681 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:38:08.940 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:08.940 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:38:08.940 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:08.940 20:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:08.940 rmmod nvme_tcp 00:38:08.940 rmmod nvme_fabrics 00:38:08.940 rmmod nvme_keyring 00:38:08.940 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:08.940 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:38:08.940 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:38:08.940 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 84589 ']' 00:38:08.940 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 84589 00:38:08.940 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 84589 ']' 00:38:08.940 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 84589 00:38:08.940 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:38:08.940 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:08.940 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84589 00:38:08.940 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:08.940 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:08.940 killing process with pid 84589 00:38:08.940 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84589' 00:38:08.940 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 84589 00:38:08.940 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 
84589 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:09.200 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.gS2 00:38:09.460 00:38:09.460 real 0m14.350s 00:38:09.460 user 0m19.199s 00:38:09.460 sys 0m5.888s 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:09.460 ************************************ 00:38:09.460 END TEST nvmf_fips 00:38:09.460 20:37:04 
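The iptables cleanup traced above (common.sh@789) is the counterpart of the ipts calls made during setup: every rule was inserted with an -m comment --comment 'SPDK_NVMF:...' tag, so teardown can re-load the ruleset with those lines filtered out instead of tracking each rule individually. Sketches of both helpers as reconstructed from the xtrace expansions (not the verbatim nvmf/common.sh source):

  ipts() {   # insert a rule, tagged so cleanup can find it again
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  }
  iptr() {   # drop every tagged rule in one pass
    iptables-save | grep -v SPDK_NVMF | iptables-restore
  }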
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:38:09.460 ************************************ 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:38:09.460 ************************************ 00:38:09.460 START TEST nvmf_control_msg_list 00:38:09.460 ************************************ 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:38:09.460 * Looking for test storage... 00:38:09.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:38:09.460 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:09.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.461 --rc genhtml_branch_coverage=1 00:38:09.461 --rc genhtml_function_coverage=1 00:38:09.461 --rc genhtml_legend=1 00:38:09.461 --rc geninfo_all_blocks=1 00:38:09.461 --rc geninfo_unexecuted_blocks=1 00:38:09.461 00:38:09.461 ' 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:09.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.461 --rc genhtml_branch_coverage=1 00:38:09.461 --rc genhtml_function_coverage=1 00:38:09.461 --rc genhtml_legend=1 00:38:09.461 --rc geninfo_all_blocks=1 00:38:09.461 --rc geninfo_unexecuted_blocks=1 00:38:09.461 00:38:09.461 ' 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:09.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.461 --rc genhtml_branch_coverage=1 00:38:09.461 --rc genhtml_function_coverage=1 00:38:09.461 --rc genhtml_legend=1 00:38:09.461 --rc geninfo_all_blocks=1 00:38:09.461 --rc geninfo_unexecuted_blocks=1 00:38:09.461 00:38:09.461 ' 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:09.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.461 --rc genhtml_branch_coverage=1 00:38:09.461 --rc genhtml_function_coverage=1 00:38:09.461 --rc genhtml_legend=1 00:38:09.461 --rc geninfo_all_blocks=1 00:38:09.461 --rc geninfo_unexecuted_blocks=1 00:38:09.461 00:38:09.461 ' 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:09.461 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:09.721 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:09.721 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:38:09.721 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:38:09.721 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:09.721 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:09.721 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:09.721 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:09.721 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:09.721 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:38:09.721 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:09.721 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:09.721 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:09.721 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.721 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.721 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.721 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:38:09.721 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.721 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:38:09.721 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:09.721 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:09.721 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:09.722 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@458 -- # nvmf_veth_init 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
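The "[: : integer expression expected" complaint just above is a small script wart rather than a test failure: an unset configuration variable expands to the empty string at nvmf/common.sh line 33, and [ "" -eq 1 ] is a type error for the test builtin, which simply evaluates false and lets the run continue. The usual guard is a default expansion; a sketch with a hypothetical variable name (the trace does not show which variable is unset):

  # ':-0' supplies a numeric default so the comparison is always well-formed
  if [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
  fi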
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:38:09.722 Cannot find device "nvmf_init_br" 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:38:09.722 Cannot find device "nvmf_init_br2" 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:38:09.722 Cannot find device "nvmf_tgt_br" 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:38:09.722 Cannot find device "nvmf_tgt_br2" 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:38:09.722 Cannot find device "nvmf_init_br" 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:38:09.722 Cannot find device "nvmf_init_br2" 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:38:09.722 Cannot find device "nvmf_tgt_br" 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:38:09.722 Cannot find device "nvmf_tgt_br2" 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:38:09.722 Cannot find device "nvmf_br" 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:38:09.722 Cannot find 
device "nvmf_init_if" 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:38:09.722 Cannot find device "nvmf_init_if2" 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:09.722 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:09.722 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:09.722 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:09.986 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:09.986 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:38:09.986 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:38:09.986 20:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:38:09.986 20:37:05 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:38:09.986 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:09.986 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:38:09.986 00:38:09.986 --- 10.0.0.3 ping statistics --- 00:38:09.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:09.986 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:38:09.986 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:38:09.986 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:38:09.986 00:38:09.986 --- 10.0.0.4 ping statistics --- 00:38:09.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:09.986 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:09.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:09.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:38:09.986 00:38:09.986 --- 10.0.0.1 ping statistics --- 00:38:09.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:09.986 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:38:09.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:09.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:38:09.986 00:38:09.986 --- 10.0.0.2 ping statistics --- 00:38:09.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:09.986 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:09.986 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # return 0 00:38:09.987 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:09.987 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:09.987 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:09.987 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:09.987 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:09.987 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:09.987 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:09.987 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:38:09.987 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:09.987 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:09.987 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:38:09.987 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=85040 00:38:09.987 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:38:09.987 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 85040 00:38:09.987 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 85040 ']' 00:38:09.987 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:09.987 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:09.987 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:09.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:09.987 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:09.987 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:38:09.987 [2024-10-01 20:37:05.227993] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:38:09.987 [2024-10-01 20:37:05.228085] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:10.248 [2024-10-01 20:37:05.369603] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:10.248 [2024-10-01 20:37:05.441165] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:10.248 [2024-10-01 20:37:05.441231] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:10.248 [2024-10-01 20:37:05.441246] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:10.248 [2024-10-01 20:37:05.441257] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:10.248 [2024-10-01 20:37:05.441265] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:10.248 [2024-10-01 20:37:05.441299] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:38:10.508 [2024-10-01 20:37:05.654511] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:38:10.508 Malloc0 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:38:10.508 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:10.509 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:38:10.509 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:10.509 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:38:10.509 [2024-10-01 20:37:05.690285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:10.509 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:10.509 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=85077 00:38:10.509 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:38:10.509 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=85078 00:38:10.509 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:38:10.509 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=85079 00:38:10.509 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:38:10.509 20:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 85077 00:38:10.768 [2024-10-01 20:37:05.868602] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery 
subsystem. This behavior is deprecated and will be removed in a future release.
00:38:10.768 [2024-10-01 20:37:05.868921] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:38:10.768 [2024-10-01 20:37:05.878773] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:38:11.708 Initializing NVMe Controllers
00:38:11.708 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:38:11.708 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:38:11.708 Initialization complete. Launching workers.
00:38:11.708 ========================================================
00:38:11.708 Latency(us)
00:38:11.708 Device Information : IOPS MiB/s Average min max
00:38:11.708 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3716.00 14.52 268.79 144.88 617.13
00:38:11.708 ========================================================
00:38:11.708 Total : 3716.00 14.52 268.79 144.88 617.13
00:38:11.708
00:38:11.708 Initializing NVMe Controllers
00:38:11.708 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:38:11.708 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:38:11.708 Initialization complete. Launching workers.
00:38:11.708 ========================================================
00:38:11.708 Latency(us)
00:38:11.708 Device Information : IOPS MiB/s Average min max
00:38:11.708 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3710.00 14.49 269.21 123.72 713.51
00:38:11.708 ========================================================
00:38:11.708 Total : 3710.00 14.49 269.21 123.72 713.51
00:38:11.708
00:38:11.708 Initializing NVMe Controllers
00:38:11.708 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:38:11.708 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:38:11.708 Initialization complete. Launching workers.
00:38:11.708 ========================================================
00:38:11.708 Latency(us)
00:38:11.708 Device Information : IOPS MiB/s Average min max
00:38:11.708 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3730.00 14.57 267.76 110.66 1605.21
00:38:11.708 ========================================================
00:38:11.708 Total : 3730.00 14.57 267.76 110.66 1605.21
00:38:11.708
00:38:11.708 20:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 85078
00:38:11.708 20:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 85079
00:38:11.708 20:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:38:11.708 20:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:38:11.708 20:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup
00:38:11.708 20:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:38:11.967 20:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:11.967 20:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:38:11.967 20:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:11.967 20:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:38:11.967 rmmod nvme_tcp
00:38:11.967 rmmod nvme_fabrics
00:38:11.967 rmmod nvme_keyring
00:38:11.967 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:38:11.967 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:38:11.967 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:38:11.967 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' -n 85040 ']'
00:38:11.967 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 85040
00:38:11.967 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 85040 ']'
00:38:11.967 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 85040
00:38:11.967 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname
00:38:11.967 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:38:11.967 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85040
00:38:11.967 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:38:11.967 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:38:11.967 killing process with pid 85040
00:38:11.967 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85040'
00:38:11.967 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 85040
00:38:11.967 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list --
common/autotest_common.sh@974 -- # wait 85040 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:12.227 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:38:12.488 00:38:12.488 real 0m2.959s 00:38:12.488 user 0m4.674s 00:38:12.488 
sys 0m1.399s 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:38:12.488 ************************************ 00:38:12.488 END TEST nvmf_control_msg_list 00:38:12.488 ************************************ 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:38:12.488 ************************************ 00:38:12.488 START TEST nvmf_wait_for_buf 00:38:12.488 ************************************ 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:38:12.488 * Looking for test storage... 00:38:12.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:12.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:12.488 --rc genhtml_branch_coverage=1 00:38:12.488 --rc genhtml_function_coverage=1 00:38:12.488 --rc genhtml_legend=1 00:38:12.488 --rc geninfo_all_blocks=1 00:38:12.488 --rc geninfo_unexecuted_blocks=1 00:38:12.488 00:38:12.488 ' 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:12.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:12.488 --rc genhtml_branch_coverage=1 00:38:12.488 --rc genhtml_function_coverage=1 00:38:12.488 --rc genhtml_legend=1 00:38:12.488 --rc geninfo_all_blocks=1 00:38:12.488 --rc geninfo_unexecuted_blocks=1 00:38:12.488 00:38:12.488 ' 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:12.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:12.488 --rc genhtml_branch_coverage=1 00:38:12.488 --rc genhtml_function_coverage=1 00:38:12.488 --rc genhtml_legend=1 00:38:12.488 --rc geninfo_all_blocks=1 00:38:12.488 --rc geninfo_unexecuted_blocks=1 00:38:12.488 00:38:12.488 ' 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:12.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:12.488 --rc genhtml_branch_coverage=1 00:38:12.488 --rc genhtml_function_coverage=1 00:38:12.488 --rc genhtml_legend=1 00:38:12.488 --rc geninfo_all_blocks=1 00:38:12.488 --rc geninfo_unexecuted_blocks=1 00:38:12.488 00:38:12.488 ' 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:12.488 20:37:07 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:12.488 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:12.489 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 
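
Note the one genuine shell complaint in the sourcing above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' while assembling the NVMF_APP arguments, and test(1)'s -eq demands integers, so the empty expansion prints "[: : integer expression expected". The comparison merely returns non-zero, the guarded branch is skipped, and the run continues, which is why the suite tolerates it. A minimal reproduction plus the usual guard (the variable name here is a stand-in, not the one common.sh actually checks):

    want_phy_nics=''                          # empty, as when a CI knob is unset
    if [ "$want_phy_nics" -eq 1 ]; then       # prints: [: : integer expression expected
        echo 'would configure physical NICs'
    fi
    if [ "${want_phy_nics:-0}" -eq 1 ]; then  # substitute a default before the integer test
        echo 'would configure physical NICs'
    fi
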
00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@458 -- # nvmf_veth_init 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:12.489 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:38:12.750 Cannot find device "nvmf_init_br" 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:38:12.750 Cannot find device "nvmf_init_br2" 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:38:12.750 Cannot find device "nvmf_tgt_br" 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:38:12.750 Cannot find device "nvmf_tgt_br2" 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:38:12.750 Cannot find device "nvmf_init_br" 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:38:12.750 Cannot find device "nvmf_init_br2" 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:38:12.750 Cannot find device "nvmf_tgt_br" 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:38:12.750 Cannot find device "nvmf_tgt_br2" 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:38:12.750 Cannot find device "nvmf_br" 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:38:12.750 Cannot find device "nvmf_init_if" 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:38:12.750 Cannot find device "nvmf_init_if2" 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:12.750 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:12.750 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:12.750 20:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:38:13.009 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:38:13.009 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:38:13.009 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:38:13.009 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:38:13.009 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:38:13.009 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:38:13.009 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:38:13.009 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:38:13.010 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:13.010 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:38:13.010 00:38:13.010 --- 10.0.0.3 ping statistics --- 00:38:13.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:13.010 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:38:13.010 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:38:13.010 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:38:13.010 00:38:13.010 --- 10.0.0.4 ping statistics --- 00:38:13.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:13.010 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:13.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:13.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:38:13.010 00:38:13.010 --- 10.0.0.1 ping statistics --- 00:38:13.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:13.010 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:38:13.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:13.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:38:13.010 00:38:13.010 --- 10.0.0.2 ping statistics --- 00:38:13.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:13.010 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # return 0 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=85312 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 85312 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 85312 ']' 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:13.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:13.010 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:38:13.010 [2024-10-01 20:37:08.230851] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
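
Here nvmf_tgt (pid 85312) has just been launched inside the namespace with --wait-for-rpc: waitforlisten blocks until the app answers on /var/tmp/spdk.sock, and because framework init is held back, the iobuf pools can still be shrunk before framework_start_init runs (the accel_set_options/iobuf_set_options calls a few entries below). A rough stand-in for that launch-and-wait step using SPDK's scripts/rpc.py (a sketch under those assumptions, not the actual waitforlisten helper from autotest_common.sh):

    # Start the target paused before framework init, inside the test namespace.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Poll the UNIX-domain RPC socket until the app accepts commands.
    for _ in $(seq 1 100); do
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
    kill -0 "$nvmfpid"    # the process should still be alive once the socket answers
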
00:38:13.010 [2024-10-01 20:37:08.230946] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:13.270 [2024-10-01 20:37:08.366082] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:13.270 [2024-10-01 20:37:08.416433] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:13.270 [2024-10-01 20:37:08.416486] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:13.270 [2024-10-01 20:37:08.416510] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:13.270 [2024-10-01 20:37:08.416517] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:13.270 [2024-10-01 20:37:08.416523] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:13.270 [2024-10-01 20:37:08.416547] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:13.270 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:13.270 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:38:13.270 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:13.270 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:13.270 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.531 20:37:08 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:38:13.531 Malloc0 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:38:13.531 [2024-10-01 20:37:08.672920] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:38:13.531 [2024-10-01 20:37:08.705017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.531 20:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:38:13.791 [2024-10-01 20:37:08.882310] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release.
00:38:15.169 Initializing NVMe Controllers
00:38:15.169 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:38:15.169 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:38:15.169 Initialization complete. Launching workers.
00:38:15.169 ========================================================
00:38:15.169 Latency(us)
00:38:15.169 Device Information : IOPS MiB/s Average min max
00:38:15.169 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 128.49 16.06 32247.89 8019.83 64035.30
00:38:15.169 ========================================================
00:38:15.169 Total : 128.49 16.06 32247.89 8019.83 64035.30
00:38:15.169
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]]
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:38:15.169 rmmod nvme_tcp
00:38:15.169 rmmod nvme_fabrics
00:38:15.169 rmmod nvme_keyring
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 85312 ']'
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 85312
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 85312 ']'
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 85312
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname
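
That retry_count=2038 is the pass condition of the whole test: iobuf_set_options capped the shared small-buffer pool at 154 x 8 KiB buffers before framework init, nvmf_create_transport got only 24 shared buffers (-n 24 -b 24), and the 4-deep 128 KiB randread run then forced the nvmf_TCP module to wait for buffers 2038 times, which is also why the latency table above shows roughly 32 ms averages against the roughly 270 us of the control-msg run. wait_for_buf.sh@33 would only fail the run if that counter stayed at zero. The assertion, condensed with rpc_cmd spelled out as rpc.py (the jq filter is verbatim from the trace; the error handling is a sketch, not the script's exact code):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    retry_count=$("$rpc" iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    if [[ $retry_count -eq 0 ]]; then
        echo 'expected nvmf_TCP to retry small-iobuf allocation at least once' >&2
        exit 1
    fi
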
00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85312 00:38:15.169 killing process with pid 85312 00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85312' 00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 85312 00:38:15.169 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 85312 00:38:15.428 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:15.428 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:15.428 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:15.428 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:38:15.428 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:38:15.428 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:15.428 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:38:15.428 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:15.428 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:38:15.428 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:38:15.428 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:38:15.428 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:38:15.428 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:38:15.428 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:38:15.688 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:38:15.688 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:38:15.688 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:38:15.688 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:38:15.688 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:38:15.688 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:38:15.688 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:15.688 20:37:10 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:15.688 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:38:15.688 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:15.688 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:15.688 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:15.688 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:38:15.688 00:38:15.688 real 0m3.338s 00:38:15.688 user 0m2.704s 00:38:15.688 sys 0m0.728s 00:38:15.688 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:15.688 20:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:38:15.688 ************************************ 00:38:15.688 END TEST nvmf_wait_for_buf 00:38:15.688 ************************************ 00:38:15.688 20:37:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:38:15.688 20:37:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:38:15.688 20:37:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:38:15.688 00:38:15.688 real 7m7.671s 00:38:15.688 user 17m23.099s 00:38:15.688 sys 1m23.016s 00:38:15.688 20:37:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:15.688 20:37:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:38:15.688 ************************************ 00:38:15.688 END TEST nvmf_target_extra 00:38:15.688 ************************************ 00:38:15.949 20:37:10 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:38:15.949 20:37:10 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:15.949 20:37:10 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:15.949 20:37:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:15.949 ************************************ 00:38:15.949 START TEST nvmf_host 00:38:15.949 ************************************ 00:38:15.949 20:37:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:38:15.949 * Looking for test storage... 
00:38:15.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:15.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.949 --rc genhtml_branch_coverage=1 00:38:15.949 --rc genhtml_function_coverage=1 00:38:15.949 --rc genhtml_legend=1 00:38:15.949 --rc geninfo_all_blocks=1 00:38:15.949 --rc geninfo_unexecuted_blocks=1 00:38:15.949 00:38:15.949 ' 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:15.949 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:38:15.949 --rc genhtml_branch_coverage=1 00:38:15.949 --rc genhtml_function_coverage=1 00:38:15.949 --rc genhtml_legend=1 00:38:15.949 --rc geninfo_all_blocks=1 00:38:15.949 --rc geninfo_unexecuted_blocks=1 00:38:15.949 00:38:15.949 ' 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:15.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.949 --rc genhtml_branch_coverage=1 00:38:15.949 --rc genhtml_function_coverage=1 00:38:15.949 --rc genhtml_legend=1 00:38:15.949 --rc geninfo_all_blocks=1 00:38:15.949 --rc geninfo_unexecuted_blocks=1 00:38:15.949 00:38:15.949 ' 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:15.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.949 --rc genhtml_branch_coverage=1 00:38:15.949 --rc genhtml_function_coverage=1 00:38:15.949 --rc genhtml_legend=1 00:38:15.949 --rc geninfo_all_blocks=1 00:38:15.949 --rc geninfo_unexecuted_blocks=1 00:38:15.949 00:38:15.949 ' 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:38:15.949 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:15.950 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 
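
One recurring artifact in the trace above is worth flagging: each time nvmf/common.sh is sourced, bash prints "[: : integer expression expected" from its line 33, because an empty expansion reaches a numeric test ('[' '' -eq 1 ']'). The message is cosmetic, the test simply evaluates false, but the pattern and its usual guard look like this (SOME_FLAG is a hypothetical stand-in; the actual variable at common.sh line 33 is not visible in this log):

[ "$SOME_FLAG" -eq 1 ]        # with SOME_FLAG unset or empty, [ sees '' and complains
[ "${SOME_FLAG:-0}" -eq 1 ]   # defaulting the expansion keeps the logs clean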
00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.950 ************************************ 00:38:15.950 START TEST nvmf_multicontroller 00:38:15.950 ************************************ 00:38:15.950 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:38:16.211 * Looking for test storage... 00:38:16.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:16.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.211 --rc genhtml_branch_coverage=1 00:38:16.211 --rc genhtml_function_coverage=1 00:38:16.211 --rc genhtml_legend=1 00:38:16.211 --rc geninfo_all_blocks=1 00:38:16.211 --rc geninfo_unexecuted_blocks=1 00:38:16.211 00:38:16.211 ' 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:16.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.211 --rc genhtml_branch_coverage=1 00:38:16.211 --rc genhtml_function_coverage=1 00:38:16.211 --rc genhtml_legend=1 00:38:16.211 --rc geninfo_all_blocks=1 00:38:16.211 --rc geninfo_unexecuted_blocks=1 00:38:16.211 00:38:16.211 ' 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:16.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.211 --rc genhtml_branch_coverage=1 00:38:16.211 --rc genhtml_function_coverage=1 00:38:16.211 --rc genhtml_legend=1 00:38:16.211 --rc geninfo_all_blocks=1 00:38:16.211 --rc geninfo_unexecuted_blocks=1 00:38:16.211 00:38:16.211 ' 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:16.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.211 --rc genhtml_branch_coverage=1 00:38:16.211 --rc genhtml_function_coverage=1 00:38:16.211 --rc genhtml_legend=1 00:38:16.211 --rc geninfo_all_blocks=1 00:38:16.211 --rc geninfo_unexecuted_blocks=1 00:38:16.211 00:38:16.211 ' 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:38:16.211 20:37:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.211 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:16.212 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:16.212 20:37:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@458 -- # nvmf_veth_init 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:16.212 20:37:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:38:16.212 Cannot find device "nvmf_init_br" 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:38:16.212 Cannot find device "nvmf_init_br2" 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:38:16.212 Cannot find device "nvmf_tgt_br" 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:38:16.212 Cannot find device "nvmf_tgt_br2" 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:38:16.212 Cannot find device "nvmf_init_br" 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 00:38:16.212 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:38:16.212 Cannot find device "nvmf_init_br2" 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:38:16.471 Cannot find device "nvmf_tgt_br" 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:38:16.471 Cannot find device "nvmf_tgt_br2" 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:38:16.471 Cannot find device "nvmf_br" 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:38:16.471 Cannot find device "nvmf_init_if" 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:38:16.471 Cannot find device "nvmf_init_if2" 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:16.471 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:16.471 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:38:16.471 20:37:11 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:38:16.471 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:38:16.731 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:16.731 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:16.731 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:16.731 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:38:16.732 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:16.732 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:38:16.732 00:38:16.732 --- 10.0.0.3 ping statistics --- 00:38:16.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:16.732 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:38:16.732 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:38:16.732 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:38:16.732 00:38:16.732 --- 10.0.0.4 ping statistics --- 00:38:16.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:16.732 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:16.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:16.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:38:16.732 00:38:16.732 --- 10.0.0.1 ping statistics --- 00:38:16.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:16.732 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:38:16.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:16.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:38:16.732 00:38:16.732 --- 10.0.0.2 ping statistics --- 00:38:16.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:16.732 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@459 -- # return 0 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=85644 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 85644 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 85644 ']' 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:16.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:16.732 20:37:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:16.732 [2024-10-01 20:37:11.881682] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
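
Before nvmf_tgt comes up, nvmf_veth_init rebuilds the virtual topology that every test in this suite targets: veth pairs for the initiator side (10.0.0.1 and 10.0.0.2 on the host) and the target side (10.0.0.3 and 10.0.0.4 inside the nvmf_tgt_ns_spdk namespace), all joined by the nvmf_br bridge, with iptables ACCEPT rules for port 4420 and a four-way ping sweep as a sanity check. Condensed to a single interface pair (the second pair and the SPDK_NVMF comment tags are omitted), the commands in the trace amount to:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end moves into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                     # bridge ties both sides together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                          # host side can reach the target address

Running the target with ip netns exec nvmf_tgt_ns_spdk, as nvmfappstart does below, is what lets 10.0.0.3:4420 behave like a remote NVMe/TCP target from the host's point of view.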
00:38:16.732 [2024-10-01 20:37:11.881777] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:16.991 [2024-10-01 20:37:12.021507] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:16.992 [2024-10-01 20:37:12.093023] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:16.992 [2024-10-01 20:37:12.093086] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:16.992 [2024-10-01 20:37:12.093101] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:16.992 [2024-10-01 20:37:12.093124] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:16.992 [2024-10-01 20:37:12.093134] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:16.992 [2024-10-01 20:37:12.093695] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:38:16.992 [2024-10-01 20:37:12.093853] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:38:16.992 [2024-10-01 20:37:12.093874] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:16.992 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:16.992 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:38:16.992 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:16.992 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:16.992 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:17.251 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:17.251 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:17.251 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.251 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:17.251 [2024-10-01 20:37:12.287508] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:17.251 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.251 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:17.251 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.251 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:17.251 Malloc0 00:38:17.251 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.251 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:17.251 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.251 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:38:17.251 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.251 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:17.251 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.251 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:17.251 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.251 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:38:17.251 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.251 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:17.251 [2024-10-01 20:37:12.336273] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:17.251 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:17.252 [2024-10-01 20:37:12.344204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:17.252 Malloc1 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=85682 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 85682 /var/tmp/bdevperf.sock 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 85682 ']' 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:17.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
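
From this point the multicontroller test drives bdevperf, started above with -z so it idles until configured, entirely over the /var/tmp/bdevperf.sock RPC socket: attach a controller named NVMe0 to cnode1, confirm exactly one controller exists, then prove that conflicting re-attachments are refused. A condensed sketch of the exchange that follows, again with rpc.py standing in for the suite's rpc_cmd wrapper:

sock=/var/tmp/bdevperf.sock
rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1        # produces bdev NVMe0n1
rpc.py -s $sock bdev_nvme_get_controllers | grep -c NVMe      # expect: 1
# Reusing the name NVMe0 with a different hostnqn (-q) must fail with
# Code=-114 "A controller named NVMe0 already exists ...", and the suite's
# NOT() helper asserts exactly that:
rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 \
    -q nqn.2021-09-7.io.spdk:00001 && echo "unexpected success"

The Code=-114 responses that follow in the log are therefore expected failures: the test passes precisely because these RPCs are rejected.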
00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:17.252 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:17.822 NVMe0n1 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.822 1 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:17.822 2024/10/01 20:37:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 
hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:38:17.822 request: 00:38:17.822 { 00:38:17.822 "method": "bdev_nvme_attach_controller", 00:38:17.822 "params": { 00:38:17.822 "name": "NVMe0", 00:38:17.822 "trtype": "tcp", 00:38:17.822 "traddr": "10.0.0.3", 00:38:17.822 "adrfam": "ipv4", 00:38:17.822 "trsvcid": "4420", 00:38:17.822 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:17.822 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:38:17.822 "hostaddr": "10.0.0.1", 00:38:17.822 "prchk_reftag": false, 00:38:17.822 "prchk_guard": false, 00:38:17.822 "hdgst": false, 00:38:17.822 "ddgst": false, 00:38:17.822 "allow_unrecognized_csi": false 00:38:17.822 } 00:38:17.822 } 00:38:17.822 Got JSON-RPC error response 00:38:17.822 GoRPCClient: error on JSON-RPC call 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:17.822 2024/10/01 20:37:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: 
error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:38:17.822 request: 00:38:17.822 { 00:38:17.822 "method": "bdev_nvme_attach_controller", 00:38:17.822 "params": { 00:38:17.822 "name": "NVMe0", 00:38:17.822 "trtype": "tcp", 00:38:17.822 "traddr": "10.0.0.3", 00:38:17.822 "adrfam": "ipv4", 00:38:17.822 "trsvcid": "4420", 00:38:17.822 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:38:17.822 "hostaddr": "10.0.0.1", 00:38:17.822 "prchk_reftag": false, 00:38:17.822 "prchk_guard": false, 00:38:17.822 "hdgst": false, 00:38:17.822 "ddgst": false, 00:38:17.822 "allow_unrecognized_csi": false 00:38:17.822 } 00:38:17.822 } 00:38:17.822 Got JSON-RPC error response 00:38:17.822 GoRPCClient: error on JSON-RPC call 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:17.822 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:17.823 2024/10/01 20:37:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:38:17.823 request: 00:38:17.823 { 00:38:17.823 
"method": "bdev_nvme_attach_controller", 00:38:17.823 "params": { 00:38:17.823 "name": "NVMe0", 00:38:17.823 "trtype": "tcp", 00:38:17.823 "traddr": "10.0.0.3", 00:38:17.823 "adrfam": "ipv4", 00:38:17.823 "trsvcid": "4420", 00:38:17.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:17.823 "hostaddr": "10.0.0.1", 00:38:17.823 "prchk_reftag": false, 00:38:17.823 "prchk_guard": false, 00:38:17.823 "hdgst": false, 00:38:17.823 "ddgst": false, 00:38:17.823 "multipath": "disable", 00:38:17.823 "allow_unrecognized_csi": false 00:38:17.823 } 00:38:17.823 } 00:38:17.823 Got JSON-RPC error response 00:38:17.823 GoRPCClient: error on JSON-RPC call 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:17.823 2024/10/01 20:37:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:38:17.823 request: 00:38:17.823 { 00:38:17.823 "method": "bdev_nvme_attach_controller", 00:38:17.823 "params": { 00:38:17.823 "name": "NVMe0", 00:38:17.823 "trtype": "tcp", 00:38:17.823 "traddr": 
"10.0.0.3", 00:38:17.823 "adrfam": "ipv4", 00:38:17.823 "trsvcid": "4420", 00:38:17.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:17.823 "hostaddr": "10.0.0.1", 00:38:17.823 "prchk_reftag": false, 00:38:17.823 "prchk_guard": false, 00:38:17.823 "hdgst": false, 00:38:17.823 "ddgst": false, 00:38:17.823 "multipath": "failover", 00:38:17.823 "allow_unrecognized_csi": false 00:38:17.823 } 00:38:17.823 } 00:38:17.823 Got JSON-RPC error response 00:38:17.823 GoRPCClient: error on JSON-RPC call 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.823 20:37:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:17.823 00:38:17.823 20:37:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.823 20:37:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:17.823 20:37:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.823 20:37:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:17.823 20:37:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.823 20:37:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:38:17.823 20:37:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.823 20:37:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:18.082 00:38:18.082 20:37:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.082 20:37:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:38:18.082 20:37:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.082 20:37:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:38:18.082 20:37:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:18.082 20:37:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.082 20:37:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:38:18.082 20:37:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:19.020 { 00:38:19.020 "results": [ 00:38:19.020 { 00:38:19.020 "job": "NVMe0n1", 00:38:19.020 "core_mask": "0x1", 00:38:19.020 "workload": "write", 00:38:19.020 "status": "finished", 00:38:19.020 "queue_depth": 128, 00:38:19.020 "io_size": 4096, 00:38:19.020 "runtime": 1.0041, 00:38:19.020 "iops": 21791.6542177074, 00:38:19.020 "mibps": 85.12364928791953, 00:38:19.020 "io_failed": 0, 00:38:19.020 "io_timeout": 0, 00:38:19.020 "avg_latency_us": 5865.391302209056, 00:38:19.020 "min_latency_us": 1750.1090909090908, 00:38:19.020 "max_latency_us": 10902.807272727272 00:38:19.020 } 00:38:19.020 ], 00:38:19.020 "core_count": 1 00:38:19.020 } 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]] 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:19.279 nvme1n1 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr' 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]] 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 
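The tail of the test exercises host-address pinning: nvme1 is attached to cnode2 with an explicit initiator address (-i), and the target's nvmf_subsystem_get_qpairs output must report that same address back as the qpair's peer. A sketch of the check performed in the surrounding lines, using this run's addresses:

    # Attach from the second initiator IP, then confirm the target accepted
    # the connection from exactly that address (the get_qpairs RPC goes to
    # the target app's default socket, not the bdevperf one).
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2
    peer=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 \
        | jq -r '.[].peer_address.traddr')
    [[ $peer == 10.0.0.2 ]]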
00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:19.279 nvme1n1 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr' 00:38:19.279 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:19.538 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.538 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]] 00:38:19.538 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 85682 00:38:19.538 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 85682 ']' 00:38:19.538 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 85682 00:38:19.538 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:38:19.538 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:19.538 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85682 00:38:19.538 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:19.538 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:19.538 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85682' 00:38:19.538 killing process with pid 85682 00:38:19.538 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 85682 00:38:19.538 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 85682 00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - 
SIGINT SIGTERM EXIT 00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:38:19.798 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:38:19.798 [2024-10-01 20:37:12.458358] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:38:19.798 [2024-10-01 20:37:12.458471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85682 ] 00:38:19.798 [2024-10-01 20:37:12.599206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:19.798 [2024-10-01 20:37:12.668863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:19.798 [2024-10-01 20:37:13.105966] bdev.c:4724:bdev_name_add: *ERROR*: Bdev name 5043a7c7-b1d5-473c-bb96-d1283ae2bc46 already exists 00:38:19.798 [2024-10-01 20:37:13.106013] bdev.c:7885:bdev_register: *ERROR*: Unable to add uuid:5043a7c7-b1d5-473c-bb96-d1283ae2bc46 alias for bdev NVMe1n1 00:38:19.798 [2024-10-01 20:37:13.106046] bdev_nvme.c:4481:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:38:19.798 Running I/O for 1 seconds... 
00:38:19.798 21753.00 IOPS, 84.97 MiB/s
00:38:19.798 Latency(us)
00:38:19.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:19.798 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:38:19.798 NVMe0n1 : 1.00 21791.65 85.12 0.00 0.00 5865.39 1750.11 10902.81
00:38:19.798 ===================================================================================================================
00:38:19.798 Total : 21791.65 85.12 0.00 0.00 5865.39 1750.11 10902.81
00:38:19.798 Received shutdown signal, test time was about 1.000000 seconds
00:38:19.798
00:38:19.798 Latency(us)
00:38:19.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:19.798 ===================================================================================================================
00:38:19.798 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:19.798 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt ---
00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file
00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup
00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:38:19.798 rmmod nvme_tcp
00:38:19.798 rmmod nvme_fabrics
00:38:19.798 rmmod nvme_keyring
00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 85644 ']'
00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 85644
00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 85644 ']'
00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 85644
00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname
00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85644
00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:38:19.798 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:38:19.799 killing process with pid 85644
00:38:19.799 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller --
common/autotest_common.sh@968 -- # echo 'killing process with pid 85644' 00:38:19.799 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 85644 00:38:19.799 20:37:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 85644 00:38:20.058 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:20.058 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:20.058 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:20.058 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:38:20.058 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:38:20.058 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:20.058 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:38:20.058 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:20.058 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:38:20.058 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:38:20.058 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:38:20.058 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:38:20.058 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:38:20.058 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:38:20.058 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:38:20.058 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:38:20.058 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:38:20.058 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:38:20.058 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:38:20.317 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:38:20.317 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:20.317 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:20.317 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns 00:38:20.317 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:20.317 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:20.317 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:20.317 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0 00:38:20.317 00:38:20.317 real 0m4.241s 00:38:20.317 user 0m11.890s 
00:38:20.317 sys 0m1.152s 00:38:20.317 ************************************ 00:38:20.317 END TEST nvmf_multicontroller 00:38:20.317 ************************************ 00:38:20.317 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:20.317 20:37:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:20.317 20:37:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:38:20.317 20:37:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:20.317 20:37:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:20.318 20:37:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.318 ************************************ 00:38:20.318 START TEST nvmf_aer 00:38:20.318 ************************************ 00:38:20.318 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:38:20.318 * Looking for test storage... 00:38:20.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:38:20.318 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:20.318 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:38:20.318 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:20.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:20.578 --rc genhtml_branch_coverage=1 00:38:20.578 --rc genhtml_function_coverage=1 00:38:20.578 --rc genhtml_legend=1 00:38:20.578 --rc geninfo_all_blocks=1 00:38:20.578 --rc geninfo_unexecuted_blocks=1 00:38:20.578 00:38:20.578 ' 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:20.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:20.578 --rc genhtml_branch_coverage=1 00:38:20.578 --rc genhtml_function_coverage=1 00:38:20.578 --rc genhtml_legend=1 00:38:20.578 --rc geninfo_all_blocks=1 00:38:20.578 --rc geninfo_unexecuted_blocks=1 00:38:20.578 00:38:20.578 ' 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:20.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:20.578 --rc genhtml_branch_coverage=1 00:38:20.578 --rc genhtml_function_coverage=1 00:38:20.578 --rc genhtml_legend=1 00:38:20.578 --rc geninfo_all_blocks=1 00:38:20.578 --rc geninfo_unexecuted_blocks=1 00:38:20.578 00:38:20.578 ' 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:20.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:20.578 --rc genhtml_branch_coverage=1 00:38:20.578 --rc genhtml_function_coverage=1 00:38:20.578 --rc genhtml_legend=1 00:38:20.578 --rc geninfo_all_blocks=1 00:38:20.578 --rc geninfo_unexecuted_blocks=1 00:38:20.578 00:38:20.578 ' 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:20.578 
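The scripts/common.sh xtrace above (lt / cmp_versions / decimal) is the suite's dotted-version comparison, used here to decide whether the installed lcov is older than 2 before choosing coverage options. A compressed sketch of the same idea, not the verbatim helper:

    ver_lt() {
      # Split both versions on dots and compare field by field, numerically;
      # a missing field counts as 0. Succeeds iff $1 < $2.
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo "lcov older than 2: keep the legacy --rc options"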
20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:20.578 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:20.579 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ no == yes ]] 
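nvmf_veth_init below builds the test network from scratch; the "Cannot find device" / "Cannot open network namespace" lines in the pre-clean pass that follows are expected, since each probe is backed by a tolerated failure (the "# true" entries) so leftovers from a previous run are torn down best-effort. Condensed from the trace below, the topology created for NET_TYPE=virt is two initiator veths on the host side and two target veths inside a namespace, all plugged into one bridge:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # ...the same is repeated for nvmf_init_if2 (10.0.0.2) and nvmf_tgt_if2
    # (10.0.0.4), all links are brought up, and iptables ACCEPT rules are
    # added for port 4420 plus bridge forwarding before the ping checks.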
00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # nvmf_veth_init 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:38:20.579 Cannot find device "nvmf_init_br" 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:38:20.579 Cannot find device "nvmf_init_br2" 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:38:20.579 Cannot find device "nvmf_tgt_br" 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:38:20.579 Cannot find device "nvmf_tgt_br2" 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:38:20.579 Cannot find device "nvmf_init_br" 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:38:20.579 Cannot find device "nvmf_init_br2" 00:38:20.579 20:37:15 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:38:20.579 Cannot find device "nvmf_tgt_br" 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:38:20.579 Cannot find device "nvmf_tgt_br2" 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:38:20.579 Cannot find device "nvmf_br" 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:38:20.579 Cannot find device "nvmf_init_if" 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:38:20.579 Cannot find device "nvmf_init_if2" 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:20.579 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:20.579 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:20.579 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:38:20.839 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:20.839 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:38:20.839 00:38:20.839 --- 10.0.0.3 ping statistics --- 00:38:20.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:20.839 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:38:20.839 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:38:20.839 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:38:20.839 00:38:20.839 --- 10.0.0.4 ping statistics --- 00:38:20.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:20.839 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:20.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:20.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:38:20.839 00:38:20.839 --- 10.0.0.1 ping statistics --- 00:38:20.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:20.839 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:38:20.839 20:37:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:38:20.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:20.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:38:20.839 00:38:20.839 --- 10.0.0.2 ping statistics --- 00:38:20.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:20.839 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:38:20.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:20.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # return 0 00:38:20.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:20.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:20.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:20.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:20.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:20.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:20.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:20.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:38:20.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:20.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:20.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:20.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=85990 00:38:20.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:38:20.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 85990 00:38:20.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 85990 ']' 00:38:20.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:20.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:20.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:20.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:20.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:20.839 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:21.099 [2024-10-01 20:37:16.094185] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:38:21.099 [2024-10-01 20:37:16.094276] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:21.099 [2024-10-01 20:37:16.238991] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:21.099 [2024-10-01 20:37:16.312271] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:21.099 [2024-10-01 20:37:16.312336] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:21.099 [2024-10-01 20:37:16.312350] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:21.099 [2024-10-01 20:37:16.312361] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:21.099 [2024-10-01 20:37:16.312370] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:21.099 [2024-10-01 20:37:16.313091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:21.099 [2024-10-01 20:37:16.313296] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:38:21.099 [2024-10-01 20:37:16.313364] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:21.099 [2024-10-01 20:37:16.313350] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:38:21.358 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:21.358 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:38:21.358 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:21.358 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:21.358 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:21.358 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:21.358 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:21.358 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:21.358 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:21.358 [2024-10-01 20:37:16.514849] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:21.358 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.358 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:38:21.358 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:21.358 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:21.358 Malloc0 00:38:21.358 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.358 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:38:21.358 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:21.358 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:21.358 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.358 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:21.358 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:21.358 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:21.358 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.359 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:38:21.359 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:21.359 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:21.359 [2024-10-01 20:37:16.563226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:21.359 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.359 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:38:21.359 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:21.359 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:21.359 [ 00:38:21.359 { 00:38:21.359 "allow_any_host": true, 00:38:21.359 "hosts": [], 00:38:21.359 "listen_addresses": [], 00:38:21.359 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:21.359 "subtype": "Discovery" 00:38:21.359 }, 00:38:21.359 { 00:38:21.359 "allow_any_host": true, 00:38:21.359 "hosts": [], 00:38:21.359 "listen_addresses": [ 00:38:21.359 { 00:38:21.359 "adrfam": "IPv4", 00:38:21.359 "traddr": "10.0.0.3", 00:38:21.359 "trsvcid": "4420", 00:38:21.359 "trtype": "TCP" 00:38:21.359 } 00:38:21.359 ], 00:38:21.359 "max_cntlid": 65519, 00:38:21.359 "max_namespaces": 2, 00:38:21.359 "min_cntlid": 1, 00:38:21.359 "model_number": "SPDK bdev Controller", 00:38:21.359 "namespaces": [ 00:38:21.359 { 00:38:21.359 "bdev_name": "Malloc0", 00:38:21.359 "name": "Malloc0", 00:38:21.359 "nguid": "DD4AAED388464169A9802CA0EFAC2EE2", 00:38:21.359 "nsid": 1, 00:38:21.359 "uuid": "dd4aaed3-8846-4169-a980-2ca0efac2ee2" 00:38:21.359 } 00:38:21.359 ], 00:38:21.359 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:21.359 "serial_number": "SPDK00000000000001", 00:38:21.359 "subtype": "NVMe" 00:38:21.359 } 00:38:21.359 ] 00:38:21.359 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.359 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:38:21.359 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:38:21.359 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=86025 00:38:21.359 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:38:21.359 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:38:21.359 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:38:21.359 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:38:21.359 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:38:21.359 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:38:21.359 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:38:21.618 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:38:21.618 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:38:21.618 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:38:21.618 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:38:21.618 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:38:21.618 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:38:21.618 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:38:21.618 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:38:21.618 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:21.618 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:21.618 Malloc1 00:38:21.618 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.618 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:38:21.618 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:21.618 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:21.618 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.618 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:38:21.618 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:21.618 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:21.618 [ 00:38:21.618 { 00:38:21.618 "allow_any_host": true, 00:38:21.618 "hosts": [], 00:38:21.618 "listen_addresses": [], 00:38:21.618 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:21.618 "subtype": "Discovery" 00:38:21.618 }, 00:38:21.618 { 00:38:21.618 "allow_any_host": true, 00:38:21.618 "hosts": [], 00:38:21.618 "listen_addresses": [ 00:38:21.618 { 00:38:21.618 "adrfam": "IPv4", 00:38:21.618 "traddr": "10.0.0.3", 00:38:21.618 "trsvcid": "4420", 00:38:21.618 "trtype": "TCP" 00:38:21.618 } 00:38:21.618 ], 00:38:21.618 "max_cntlid": 65519, 00:38:21.618 "max_namespaces": 2, 00:38:21.618 "min_cntlid": 1, 00:38:21.618 "model_number": "SPDK bdev Controller", 00:38:21.618 "namespaces": [ 00:38:21.618 { 00:38:21.618 "bdev_name": "Malloc0", 00:38:21.618 "name": "Malloc0", 00:38:21.618 "nguid": "DD4AAED388464169A9802CA0EFAC2EE2", 00:38:21.618 "nsid": 1, 00:38:21.618 "uuid": "dd4aaed3-8846-4169-a980-2ca0efac2ee2" 00:38:21.618 }, 00:38:21.618 { 00:38:21.618 "bdev_name": "Malloc1", 00:38:21.618 "name": "Malloc1", 00:38:21.618 "nguid": "9DF2975909BA431D8DA3F9C5F0EC3914", 00:38:21.618 "nsid": 2, 00:38:21.618 "uuid": "9df29759-09ba-431d-8da3-f9c5f0ec3914" 00:38:21.618 } 00:38:21.618 ], 00:38:21.618 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:21.618 
"serial_number": "SPDK00000000000001", 00:38:21.618 "subtype": "NVMe" 00:38:21.618 } 00:38:21.619 ] 00:38:21.619 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.619 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 86025 00:38:21.619 Asynchronous Event Request test 00:38:21.619 Attaching to 10.0.0.3 00:38:21.619 Attached to 10.0.0.3 00:38:21.619 Registering asynchronous event callbacks... 00:38:21.619 Starting namespace attribute notice tests for all controllers... 00:38:21.619 10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:38:21.619 aer_cb - Changed Namespace 00:38:21.619 Cleaning up... 00:38:21.619 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:38:21.619 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:21.619 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:21.878 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.878 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:38:21.878 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:21.878 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:21.878 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.878 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:21.878 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:21.878 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:21.878 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.878 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:38:21.878 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:38:21.878 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:21.878 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:38:21.878 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:21.878 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:38:21.878 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:21.878 20:37:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:21.878 rmmod nvme_tcp 00:38:21.878 rmmod nvme_fabrics 00:38:21.878 rmmod nvme_keyring 00:38:21.878 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:21.878 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:38:21.878 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:38:21.878 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 85990 ']' 00:38:21.878 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 85990 00:38:21.878 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 85990 ']' 00:38:21.878 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 85990 00:38:21.878 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:38:21.878 20:37:17 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:21.879 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85990 00:38:21.879 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:21.879 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:21.879 killing process with pid 85990 00:38:21.879 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85990' 00:38:21.879 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 85990 00:38:21.879 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 85990 00:38:22.138 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:22.138 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:22.138 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:22.138 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:38:22.138 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:38:22.138 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:22.138 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:38:22.138 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:22.138 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:38:22.138 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:38:22.139 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:38:22.139 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:38:22.139 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:38:22.139 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:38:22.139 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:38:22.139 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:38:22.139 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:38:22.139 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:38:22.398 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:38:22.398 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:38:22.398 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:22.398 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:22.398 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns 00:38:22.398 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:22.398 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:22.398 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- 
# _remove_spdk_ns 00:38:22.398 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0 00:38:22.398 00:38:22.398 real 0m2.068s 00:38:22.398 user 0m3.902s 00:38:22.398 sys 0m0.753s 00:38:22.398 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:22.398 20:37:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:22.398 ************************************ 00:38:22.398 END TEST nvmf_aer 00:38:22.398 ************************************ 00:38:22.398 20:37:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:38:22.398 20:37:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:22.398 20:37:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:22.398 20:37:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.398 ************************************ 00:38:22.398 START TEST nvmf_async_init 00:38:22.398 ************************************ 00:38:22.398 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:38:22.658 * Looking for test storage... 00:38:22.658 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:38:22.658 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:22.658 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:38:22.658 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:22.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.659 --rc genhtml_branch_coverage=1 00:38:22.659 --rc genhtml_function_coverage=1 00:38:22.659 --rc genhtml_legend=1 00:38:22.659 --rc geninfo_all_blocks=1 00:38:22.659 --rc geninfo_unexecuted_blocks=1 00:38:22.659 00:38:22.659 ' 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:22.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.659 --rc genhtml_branch_coverage=1 00:38:22.659 --rc genhtml_function_coverage=1 00:38:22.659 --rc genhtml_legend=1 00:38:22.659 --rc geninfo_all_blocks=1 00:38:22.659 --rc geninfo_unexecuted_blocks=1 00:38:22.659 00:38:22.659 ' 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:22.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.659 --rc genhtml_branch_coverage=1 00:38:22.659 --rc genhtml_function_coverage=1 00:38:22.659 --rc genhtml_legend=1 00:38:22.659 --rc geninfo_all_blocks=1 00:38:22.659 --rc geninfo_unexecuted_blocks=1 00:38:22.659 00:38:22.659 ' 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:22.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.659 --rc genhtml_branch_coverage=1 00:38:22.659 --rc genhtml_function_coverage=1 00:38:22.659 --rc genhtml_legend=1 00:38:22.659 --rc geninfo_all_blocks=1 00:38:22.659 --rc geninfo_unexecuted_blocks=1 00:38:22.659 00:38:22.659 ' 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:22.659 20:37:17 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:22.659 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:22.660 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:38:22.660 20:37:17 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=19c106f1de784894b6cfd0addda08c3d 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # nvmf_veth_init 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:38:22.660 Cannot find device "nvmf_init_br" 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:38:22.660 Cannot find device "nvmf_init_br2" 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:38:22.660 Cannot find device "nvmf_tgt_br" 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:38:22.660 Cannot find device "nvmf_tgt_br2" 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:38:22.660 Cannot find device "nvmf_init_br" 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:38:22.660 Cannot find device "nvmf_init_br2" 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:38:22.660 Cannot find device "nvmf_tgt_br" 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:38:22.660 Cannot find device "nvmf_tgt_br2" 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:38:22.660 Cannot find device "nvmf_br" 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:38:22.660 Cannot find device "nvmf_init_if" 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true 00:38:22.660 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:38:22.919 Cannot find device "nvmf_init_if2" 00:38:22.919 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true 00:38:22.919 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:22.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:22.919 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true 00:38:22.919 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:38:22.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:22.919 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true 00:38:22.919 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:38:22.919 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:22.919 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:38:22.919 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:22.919 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:22.919 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:22.919 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:22.919 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:22.919 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:38:22.920 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:38:22.920 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:38:22.920 20:37:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:22.920 20:37:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:38:22.920 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:22.920 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:38:22.920 00:38:22.920 --- 10.0.0.3 ping statistics --- 00:38:22.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:22.920 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:38:22.920 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:38:22.920 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:38:22.920 00:38:22.920 --- 10.0.0.4 ping statistics --- 00:38:22.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:22.920 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:22.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:22.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:38:22.920 00:38:22.920 --- 10.0.0.1 ping statistics --- 00:38:22.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:22.920 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:38:22.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:22.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:38:22.920 00:38:22.920 --- 10.0.0.2 ping statistics --- 00:38:22.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:22.920 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # return 0 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=86255 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 86255 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 86255 ']' 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:22.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:22.920 20:37:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:23.179 [2024-10-01 20:37:18.214501] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:38:23.179 [2024-10-01 20:37:18.214601] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:23.179 [2024-10-01 20:37:18.348615] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.179 [2024-10-01 20:37:18.400479] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:38:23.179 [2024-10-01 20:37:18.400544] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:23.179 [2024-10-01 20:37:18.400570] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:23.179 [2024-10-01 20:37:18.400577] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:23.179 [2024-10-01 20:37:18.400582] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:23.179 [2024-10-01 20:37:18.400610] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:24.116 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:24.116 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:24.117 [2024-10-01 20:37:19.176563] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:24.117 null0 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 19c106f1de784894b6cfd0addda08c3d 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 
-- # xtrace_disable 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:24.117 [2024-10-01 20:37:19.216683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.117 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:24.376 nvme0n1 00:38:24.376 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.376 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:38:24.376 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.376 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:24.376 [ 00:38:24.376 { 00:38:24.376 "aliases": [ 00:38:24.376 "19c106f1-de78-4894-b6cf-d0addda08c3d" 00:38:24.376 ], 00:38:24.376 "assigned_rate_limits": { 00:38:24.376 "r_mbytes_per_sec": 0, 00:38:24.376 "rw_ios_per_sec": 0, 00:38:24.376 "rw_mbytes_per_sec": 0, 00:38:24.376 "w_mbytes_per_sec": 0 00:38:24.376 }, 00:38:24.376 "block_size": 512, 00:38:24.376 "claimed": false, 00:38:24.376 "driver_specific": { 00:38:24.376 "mp_policy": "active_passive", 00:38:24.376 "nvme": [ 00:38:24.376 { 00:38:24.376 "ctrlr_data": { 00:38:24.376 "ana_reporting": false, 00:38:24.376 "cntlid": 1, 00:38:24.376 "firmware_revision": "25.01", 00:38:24.376 "model_number": "SPDK bdev Controller", 00:38:24.376 "multi_ctrlr": true, 00:38:24.376 "oacs": { 00:38:24.376 "firmware": 0, 00:38:24.376 "format": 0, 00:38:24.376 "ns_manage": 0, 00:38:24.376 "security": 0 00:38:24.376 }, 00:38:24.376 "serial_number": "00000000000000000000", 00:38:24.376 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:24.376 "vendor_id": "0x8086" 00:38:24.376 }, 00:38:24.376 "ns_data": { 00:38:24.376 "can_share": true, 00:38:24.376 "id": 1 00:38:24.376 }, 00:38:24.376 "trid": { 00:38:24.376 "adrfam": "IPv4", 00:38:24.376 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:24.376 "traddr": "10.0.0.3", 00:38:24.376 "trsvcid": "4420", 00:38:24.376 "trtype": "TCP" 00:38:24.376 }, 00:38:24.376 "vs": { 00:38:24.376 "nvme_version": "1.3" 00:38:24.376 } 00:38:24.376 } 00:38:24.376 ] 00:38:24.376 }, 00:38:24.376 "memory_domains": [ 00:38:24.376 { 00:38:24.376 "dma_device_id": "system", 00:38:24.376 "dma_device_type": 1 00:38:24.376 } 00:38:24.376 ], 00:38:24.376 "name": "nvme0n1", 00:38:24.376 "num_blocks": 2097152, 00:38:24.376 "numa_id": -1, 00:38:24.376 "product_name": "NVMe disk", 00:38:24.376 "supported_io_types": { 00:38:24.376 "abort": true, 
00:38:24.376 "compare": true, 00:38:24.376 "compare_and_write": true, 00:38:24.376 "copy": true, 00:38:24.376 "flush": true, 00:38:24.376 "get_zone_info": false, 00:38:24.376 "nvme_admin": true, 00:38:24.376 "nvme_io": true, 00:38:24.376 "nvme_io_md": false, 00:38:24.376 "nvme_iov_md": false, 00:38:24.376 "read": true, 00:38:24.376 "reset": true, 00:38:24.376 "seek_data": false, 00:38:24.376 "seek_hole": false, 00:38:24.376 "unmap": false, 00:38:24.376 "write": true, 00:38:24.376 "write_zeroes": true, 00:38:24.376 "zcopy": false, 00:38:24.376 "zone_append": false, 00:38:24.376 "zone_management": false 00:38:24.376 }, 00:38:24.376 "uuid": "19c106f1-de78-4894-b6cf-d0addda08c3d", 00:38:24.376 "zoned": false 00:38:24.376 } 00:38:24.376 ] 00:38:24.376 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.376 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:38:24.376 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.376 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:24.376 [2024-10-01 20:37:19.481761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:38:24.376 [2024-10-01 20:37:19.482028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1300d80 (9): Bad file descriptor 00:38:24.376 [2024-10-01 20:37:19.614233] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:38:24.377 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.377 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:38:24.377 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.377 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:24.636 [ 00:38:24.636 { 00:38:24.636 "aliases": [ 00:38:24.636 "19c106f1-de78-4894-b6cf-d0addda08c3d" 00:38:24.636 ], 00:38:24.636 "assigned_rate_limits": { 00:38:24.636 "r_mbytes_per_sec": 0, 00:38:24.636 "rw_ios_per_sec": 0, 00:38:24.636 "rw_mbytes_per_sec": 0, 00:38:24.636 "w_mbytes_per_sec": 0 00:38:24.636 }, 00:38:24.636 "block_size": 512, 00:38:24.636 "claimed": false, 00:38:24.636 "driver_specific": { 00:38:24.636 "mp_policy": "active_passive", 00:38:24.636 "nvme": [ 00:38:24.636 { 00:38:24.636 "ctrlr_data": { 00:38:24.636 "ana_reporting": false, 00:38:24.636 "cntlid": 2, 00:38:24.636 "firmware_revision": "25.01", 00:38:24.636 "model_number": "SPDK bdev Controller", 00:38:24.636 "multi_ctrlr": true, 00:38:24.636 "oacs": { 00:38:24.636 "firmware": 0, 00:38:24.636 "format": 0, 00:38:24.636 "ns_manage": 0, 00:38:24.636 "security": 0 00:38:24.636 }, 00:38:24.636 "serial_number": "00000000000000000000", 00:38:24.636 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:24.636 "vendor_id": "0x8086" 00:38:24.636 }, 00:38:24.636 "ns_data": { 00:38:24.636 "can_share": true, 00:38:24.636 "id": 1 00:38:24.636 }, 00:38:24.636 "trid": { 00:38:24.636 "adrfam": "IPv4", 00:38:24.636 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:24.636 "traddr": "10.0.0.3", 00:38:24.636 "trsvcid": "4420", 00:38:24.636 "trtype": "TCP" 00:38:24.636 }, 00:38:24.636 "vs": { 00:38:24.636 "nvme_version": "1.3" 00:38:24.636 } 00:38:24.636 } 00:38:24.636 ] 00:38:24.636 }, 00:38:24.636 
"memory_domains": [ 00:38:24.636 { 00:38:24.636 "dma_device_id": "system", 00:38:24.636 "dma_device_type": 1 00:38:24.636 } 00:38:24.636 ], 00:38:24.636 "name": "nvme0n1", 00:38:24.636 "num_blocks": 2097152, 00:38:24.636 "numa_id": -1, 00:38:24.636 "product_name": "NVMe disk", 00:38:24.636 "supported_io_types": { 00:38:24.636 "abort": true, 00:38:24.636 "compare": true, 00:38:24.636 "compare_and_write": true, 00:38:24.636 "copy": true, 00:38:24.636 "flush": true, 00:38:24.636 "get_zone_info": false, 00:38:24.636 "nvme_admin": true, 00:38:24.636 "nvme_io": true, 00:38:24.636 "nvme_io_md": false, 00:38:24.636 "nvme_iov_md": false, 00:38:24.636 "read": true, 00:38:24.636 "reset": true, 00:38:24.636 "seek_data": false, 00:38:24.636 "seek_hole": false, 00:38:24.636 "unmap": false, 00:38:24.636 "write": true, 00:38:24.636 "write_zeroes": true, 00:38:24.636 "zcopy": false, 00:38:24.636 "zone_append": false, 00:38:24.636 "zone_management": false 00:38:24.636 }, 00:38:24.636 "uuid": "19c106f1-de78-4894-b6cf-d0addda08c3d", 00:38:24.636 "zoned": false 00:38:24.636 } 00:38:24.636 ] 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.fKflbG89NG 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.fKflbG89NG 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.fKflbG89NG 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:24.636 [2024-10-01 20:37:19.701914] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:38:24.636 [2024-10-01 20:37:19.702049] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:24.636 [2024-10-01 20:37:19.717929] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:24.636 nvme0n1 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.636 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:24.637 [ 00:38:24.637 { 00:38:24.637 "aliases": [ 00:38:24.637 "19c106f1-de78-4894-b6cf-d0addda08c3d" 00:38:24.637 ], 00:38:24.637 "assigned_rate_limits": { 00:38:24.637 "r_mbytes_per_sec": 0, 00:38:24.637 "rw_ios_per_sec": 0, 00:38:24.637 "rw_mbytes_per_sec": 0, 00:38:24.637 "w_mbytes_per_sec": 0 00:38:24.637 }, 00:38:24.637 "block_size": 512, 00:38:24.637 "claimed": false, 00:38:24.637 "driver_specific": { 00:38:24.637 "mp_policy": "active_passive", 00:38:24.637 "nvme": [ 00:38:24.637 { 00:38:24.637 "ctrlr_data": { 00:38:24.637 "ana_reporting": false, 00:38:24.637 "cntlid": 3, 00:38:24.637 "firmware_revision": "25.01", 00:38:24.637 "model_number": "SPDK bdev Controller", 00:38:24.637 "multi_ctrlr": true, 00:38:24.637 "oacs": { 00:38:24.637 "firmware": 0, 00:38:24.637 "format": 0, 00:38:24.637 "ns_manage": 0, 00:38:24.637 "security": 0 00:38:24.637 }, 00:38:24.637 "serial_number": "00000000000000000000", 00:38:24.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:24.637 "vendor_id": "0x8086" 00:38:24.637 }, 00:38:24.637 "ns_data": { 00:38:24.637 "can_share": true, 00:38:24.637 "id": 1 00:38:24.637 }, 00:38:24.637 "trid": { 00:38:24.637 "adrfam": "IPv4", 00:38:24.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:24.637 "traddr": "10.0.0.3", 00:38:24.637 "trsvcid": "4421", 00:38:24.637 "trtype": "TCP" 00:38:24.637 }, 00:38:24.637 "vs": { 00:38:24.637 "nvme_version": "1.3" 00:38:24.637 } 00:38:24.637 } 00:38:24.637 ] 00:38:24.637 }, 00:38:24.637 "memory_domains": [ 00:38:24.637 { 00:38:24.637 "dma_device_id": "system", 00:38:24.637 "dma_device_type": 1 00:38:24.637 } 00:38:24.637 ], 00:38:24.637 "name": "nvme0n1", 00:38:24.637 "num_blocks": 2097152, 00:38:24.637 "numa_id": 
-1, 00:38:24.637 "product_name": "NVMe disk", 00:38:24.637 "supported_io_types": { 00:38:24.637 "abort": true, 00:38:24.637 "compare": true, 00:38:24.637 "compare_and_write": true, 00:38:24.637 "copy": true, 00:38:24.637 "flush": true, 00:38:24.637 "get_zone_info": false, 00:38:24.637 "nvme_admin": true, 00:38:24.637 "nvme_io": true, 00:38:24.637 "nvme_io_md": false, 00:38:24.637 "nvme_iov_md": false, 00:38:24.637 "read": true, 00:38:24.637 "reset": true, 00:38:24.637 "seek_data": false, 00:38:24.637 "seek_hole": false, 00:38:24.637 "unmap": false, 00:38:24.637 "write": true, 00:38:24.637 "write_zeroes": true, 00:38:24.637 "zcopy": false, 00:38:24.637 "zone_append": false, 00:38:24.637 "zone_management": false 00:38:24.637 }, 00:38:24.637 "uuid": "19c106f1-de78-4894-b6cf-d0addda08c3d", 00:38:24.637 "zoned": false 00:38:24.637 } 00:38:24.637 ] 00:38:24.637 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.637 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:24.637 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.637 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:24.637 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.637 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.fKflbG89NG 00:38:24.637 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:38:24.637 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:38:24.637 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:24.637 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:38:24.897 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:24.897 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:38:24.897 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:24.897 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:24.897 rmmod nvme_tcp 00:38:24.897 rmmod nvme_fabrics 00:38:24.897 rmmod nvme_keyring 00:38:24.897 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:24.897 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:38:24.897 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:38:24.897 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 86255 ']' 00:38:24.897 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 86255 00:38:24.897 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 86255 ']' 00:38:24.897 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 86255 00:38:24.897 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:38:24.897 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:24.897 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86255 00:38:24.897 killing process with pid 86255 00:38:24.897 20:37:19 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:24.897 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:24.897 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86255' 00:38:24.897 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 86255 00:38:24.897 20:37:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 86255 00:38:25.156 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:25.156 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:25.156 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:25.156 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:38:25.156 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:38:25.156 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:25.156 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:38:25.156 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:25.156 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:38:25.156 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:38:25.156 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:38:25.156 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:38:25.156 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:38:25.156 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:38:25.156 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:38:25.156 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:38:25.156 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:38:25.157 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:38:25.157 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:38:25.157 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:38:25.157 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:25.157 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:25.157 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns 00:38:25.157 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:25.157 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:25.157 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:25.157 20:37:20 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0 00:38:25.157 00:38:25.157 real 0m2.810s 00:38:25.157 user 0m2.457s 00:38:25.157 sys 0m0.660s 00:38:25.157 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:25.157 20:37:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:25.157 ************************************ 00:38:25.157 END TEST nvmf_async_init 00:38:25.157 ************************************ 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.417 ************************************ 00:38:25.417 START TEST dma 00:38:25.417 ************************************ 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:38:25.417 * Looking for test storage... 00:38:25.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:38:25.417 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:25.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.418 --rc genhtml_branch_coverage=1 00:38:25.418 --rc genhtml_function_coverage=1 00:38:25.418 --rc genhtml_legend=1 00:38:25.418 --rc geninfo_all_blocks=1 00:38:25.418 --rc geninfo_unexecuted_blocks=1 00:38:25.418 00:38:25.418 ' 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:25.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.418 --rc genhtml_branch_coverage=1 00:38:25.418 --rc genhtml_function_coverage=1 00:38:25.418 --rc genhtml_legend=1 00:38:25.418 --rc geninfo_all_blocks=1 00:38:25.418 --rc geninfo_unexecuted_blocks=1 00:38:25.418 00:38:25.418 ' 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:25.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.418 --rc genhtml_branch_coverage=1 00:38:25.418 --rc genhtml_function_coverage=1 00:38:25.418 --rc genhtml_legend=1 00:38:25.418 --rc geninfo_all_blocks=1 00:38:25.418 --rc geninfo_unexecuted_blocks=1 00:38:25.418 00:38:25.418 ' 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:25.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.418 --rc genhtml_branch_coverage=1 00:38:25.418 --rc genhtml_function_coverage=1 00:38:25.418 --rc genhtml_legend=1 00:38:25.418 --rc geninfo_all_blocks=1 00:38:25.418 --rc geninfo_unexecuted_blocks=1 00:38:25.418 00:38:25.418 ' 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:25.418 20:37:20 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:25.418 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:25.418 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:25.419 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:25.419 20:37:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:25.419 20:37:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:38:25.419 20:37:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:38:25.419 ************************************ 00:38:25.419 END TEST dma 00:38:25.419 ************************************ 00:38:25.419 00:38:25.419 real 0m0.200s 00:38:25.419 user 0m0.135s 00:38:25.419 sys 0m0.076s 00:38:25.419 20:37:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:25.419 20:37:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.679 ************************************ 00:38:25.679 START TEST nvmf_identify 00:38:25.679 ************************************ 00:38:25.679 20:37:20 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:38:25.679 * Looking for test storage... 00:38:25.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:25.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.679 --rc genhtml_branch_coverage=1 00:38:25.679 --rc genhtml_function_coverage=1 00:38:25.679 --rc genhtml_legend=1 00:38:25.679 --rc geninfo_all_blocks=1 00:38:25.679 --rc geninfo_unexecuted_blocks=1 00:38:25.679 00:38:25.679 ' 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:25.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.679 --rc genhtml_branch_coverage=1 00:38:25.679 --rc genhtml_function_coverage=1 00:38:25.679 --rc genhtml_legend=1 00:38:25.679 --rc geninfo_all_blocks=1 00:38:25.679 --rc geninfo_unexecuted_blocks=1 00:38:25.679 00:38:25.679 ' 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:25.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.679 --rc genhtml_branch_coverage=1 00:38:25.679 --rc genhtml_function_coverage=1 00:38:25.679 --rc genhtml_legend=1 00:38:25.679 --rc geninfo_all_blocks=1 00:38:25.679 --rc geninfo_unexecuted_blocks=1 00:38:25.679 00:38:25.679 ' 00:38:25.679 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:25.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.680 --rc genhtml_branch_coverage=1 00:38:25.680 --rc genhtml_function_coverage=1 00:38:25.680 --rc genhtml_legend=1 00:38:25.680 --rc geninfo_all_blocks=1 00:38:25.680 --rc geninfo_unexecuted_blocks=1 00:38:25.680 00:38:25.680 ' 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.680 
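The nvmf_async_init run that wrapped up above boils down to a short rpc_cmd sequence: exercise a plain TCP listener on port 4420 (attach, reset, detach), then repeat the attach over the experimental TLS secure channel on 4421. A condensed, hand-assembled replay of that sequence, with the addresses, NQNs, and PSK payload taken from the log (rpc_cmd is assumed to wrap rpc.py against the default /var/tmp/spdk.sock, as in autotest_common.sh):

    # Plain-TCP leg: listener on 4420, attach, reset, detach (commands as logged).
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0
    rpc_cmd bdev_nvme_reset_controller nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0

    # TLS leg: register a PSK file, open a secure-channel listener on 4421,
    # and attach with --psk (both sides log "TLS support is considered experimental").
    key_path=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    rpc_cmd keyring_file_add_key key0 "$key_path"
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 \
        --secure-channel
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0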
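The repeated `lt 1.15 2` / cmp_versions traces (once per test suite, here again for nvmf_identify) implement a plain field-wise version compare: split both version strings on `.`, `-`, and `:`, then compare numerically left to right, which is how lcov 1.15 sorts below 2 and the legacy --rc lcov_* options get chosen. A minimal stand-alone sketch, assuming purely numeric fields (the in-tree helper in scripts/common.sh also validates each field and handles the remaining operators):

    # Sketch of the field-wise compare behind "lt 1.15 2" (numeric fields assumed).
    version_lt() {
        local IFS='.-:'
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i a b
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            a=${v1[i]:-0} b=${v2[i]:-0}
            ((a < b)) && return 0    # first differing field decides
            ((a > b)) && return 1
        done
        return 1                     # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "old lcov: use --rc lcov_branch_coverage=1 ..."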
20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:25.680 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:25.680 20:37:20 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # nvmf_veth_init 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:38:25.680 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:25.681 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:38:25.681 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:25.681 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:25.681 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:25.681 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:25.681 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:25.681 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:25.681 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:38:25.681 Cannot find device "nvmf_init_br" 00:38:25.681 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:38:25.681 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:38:25.681 Cannot find device "nvmf_init_br2" 00:38:25.681 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:38:25.681 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:38:25.940 Cannot find device "nvmf_tgt_br" 00:38:25.940 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:38:25.940 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:38:25.940 Cannot find device "nvmf_tgt_br2" 00:38:25.940 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:38:25.940 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:38:25.940 Cannot find device "nvmf_init_br" 00:38:25.940 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:38:25.940 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:38:25.940 Cannot find device "nvmf_init_br2" 00:38:25.940 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:38:25.940 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:38:25.940 Cannot find device "nvmf_tgt_br" 00:38:25.940 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:38:25.940 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:38:25.940 Cannot find device "nvmf_tgt_br2" 00:38:25.940 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:38:25.940 20:37:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:38:25.941 Cannot find device "nvmf_br" 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:38:25.941 Cannot find device "nvmf_init_if" 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:38:25.941 Cannot find device "nvmf_init_if2" 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:25.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:25.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:25.941 
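The nvmf_veth_init sequence above is easier to follow as a topology: every interface is one end of a veth pair, the *_if ends carry the 10.0.0.x/24 addresses (target ends living inside the nvmf_tgt_ns_spdk namespace), and the *_br ends are about to be enslaved to the nvmf_br bridge. A condensed replay of one initiator/target pair, using the names and addresses from the log (root required; the full helper creates the second pair of each kind the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br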
20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:25.941 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:38:26.200 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:38:26.200 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:38:26.200 00:38:26.200 --- 10.0.0.3 ping statistics --- 00:38:26.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:26.200 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:38:26.200 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:38:26.200 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:38:26.200 00:38:26.200 --- 10.0.0.4 ping statistics --- 00:38:26.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:26.200 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:26.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:26.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:38:26.200 00:38:26.200 --- 10.0.0.1 ping statistics --- 00:38:26.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:26.200 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:38:26.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:26.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:38:26.200 00:38:26.200 --- 10.0.0.2 ping statistics --- 00:38:26.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:26.200 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # return 0 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86584 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86584 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 86584 ']' 00:38:26.200 
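The ipts wrapper used for the ACCEPT rules above tags every rule it installs with an SPDK_NVMF comment, which is what makes the iptr teardown seen at the end of nvmf_async_init (iptables-save | grep -v SPDK_NVMF | iptables-restore) safe: only the tagged rules are swept, and any unrelated rules on the host survive. The tag-and-sweep pattern in isolation (root required; commands as logged):

    # Install rules tagged with a recognizable comment...
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # ...then drop exactly those rules in one pass during cleanup.
    iptables-save | grep -v SPDK_NVMF | iptables-restore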
20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:26.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:26.200 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:26.200 [2024-10-01 20:37:21.389055] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:38:26.200 [2024-10-01 20:37:21.389157] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:26.459 [2024-10-01 20:37:21.533198] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:26.459 [2024-10-01 20:37:21.608245] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:26.459 [2024-10-01 20:37:21.608308] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:26.459 [2024-10-01 20:37:21.608323] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:26.459 [2024-10-01 20:37:21.608333] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:26.459 [2024-10-01 20:37:21.608342] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
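waitforlisten above blocks until the freshly launched nvmf_tgt (pid 86584) answers on /var/tmp/spdk.sock. A minimal stand-in for that wait, assuming rpc.py is usable and that rpc_get_methods is a cheap liveness probe (wait_for_rpc is a hypothetical name; the in-tree helper in autotest_common.sh does more bookkeeping around retries and error reporting):

    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died before listening
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
                rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                      # never came up
    }
    # usage: wait_for_rpc "$nvmfpid" /var/tmp/spdk.sock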
00:38:26.459 [2024-10-01 20:37:21.608516] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:26.459 [2024-10-01 20:37:21.608568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:38:26.459 [2024-10-01 20:37:21.609253] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:38:26.459 [2024-10-01 20:37:21.609266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:26.720 [2024-10-01 20:37:21.762620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:26.720 Malloc0 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:26.720 [2024-10-01 20:37:21.855125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:26.720 [ 00:38:26.720 { 00:38:26.720 "allow_any_host": true, 00:38:26.720 "hosts": [], 00:38:26.720 "listen_addresses": [ 00:38:26.720 { 00:38:26.720 "adrfam": "IPv4", 00:38:26.720 "traddr": "10.0.0.3", 00:38:26.720 "trsvcid": "4420", 00:38:26.720 "trtype": "TCP" 00:38:26.720 } 00:38:26.720 ], 00:38:26.720 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:26.720 "subtype": "Discovery" 00:38:26.720 }, 00:38:26.720 { 00:38:26.720 "allow_any_host": true, 00:38:26.720 "hosts": [], 00:38:26.720 "listen_addresses": [ 00:38:26.720 { 00:38:26.720 "adrfam": "IPv4", 00:38:26.720 "traddr": "10.0.0.3", 00:38:26.720 "trsvcid": "4420", 00:38:26.720 "trtype": "TCP" 00:38:26.720 } 00:38:26.720 ], 00:38:26.720 "max_cntlid": 65519, 00:38:26.720 "max_namespaces": 32, 00:38:26.720 "min_cntlid": 1, 00:38:26.720 "model_number": "SPDK bdev Controller", 00:38:26.720 "namespaces": [ 00:38:26.720 { 00:38:26.720 "bdev_name": "Malloc0", 00:38:26.720 "eui64": "ABCDEF0123456789", 00:38:26.720 "name": "Malloc0", 00:38:26.720 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:38:26.720 "nsid": 1, 00:38:26.720 "uuid": "9d1f7d80-7b93-480a-bd2b-645c232041a1" 00:38:26.720 } 00:38:26.720 ], 00:38:26.720 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:26.720 "serial_number": "SPDK00000000000001", 00:38:26.720 "subtype": "NVMe" 00:38:26.720 } 00:38:26.720 ] 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.720 20:37:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:38:26.720 [2024-10-01 20:37:21.904899] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:38:26.720 [2024-10-01 20:37:21.904952] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86629 ] 00:38:26.983 [2024-10-01 20:37:22.041878] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:38:26.983 [2024-10-01 20:37:22.041945] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:38:26.983 [2024-10-01 20:37:22.041952] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:38:26.983 [2024-10-01 20:37:22.041962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:38:26.983 [2024-10-01 20:37:22.041970] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:38:26.983 [2024-10-01 20:37:22.042284] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:38:26.983 [2024-10-01 20:37:22.042349] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13b98f0 0 00:38:26.983 [2024-10-01 20:37:22.048194] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:38:26.983 [2024-10-01 20:37:22.048218] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:38:26.983 [2024-10-01 20:37:22.048240] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:38:26.983 [2024-10-01 20:37:22.048244] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:38:26.983 [2024-10-01 20:37:22.048275] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.983 [2024-10-01 20:37:22.048281] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.983 [2024-10-01 20:37:22.048286] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b98f0) 00:38:26.983 [2024-10-01 20:37:22.048299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:38:26.983 [2024-10-01 20:37:22.048328] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0000, cid 0, qid 0 00:38:26.983 [2024-10-01 20:37:22.056189] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.983 [2024-10-01 20:37:22.056209] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.983 [2024-10-01 20:37:22.056230] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.983 [2024-10-01 20:37:22.056235] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0000) on tqpair=0x13b98f0 00:38:26.983 [2024-10-01 20:37:22.056244] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:38:26.983 [2024-10-01 20:37:22.056252] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:38:26.983 [2024-10-01 20:37:22.056258] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:38:26.983 [2024-10-01 20:37:22.056274] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.983 [2024-10-01 20:37:22.056279] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.983 
[2024-10-01 20:37:22.056283] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b98f0) 00:38:26.983 [2024-10-01 20:37:22.056292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.983 [2024-10-01 20:37:22.056319] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0000, cid 0, qid 0 00:38:26.983 [2024-10-01 20:37:22.056399] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.983 [2024-10-01 20:37:22.056406] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.983 [2024-10-01 20:37:22.056409] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.983 [2024-10-01 20:37:22.056413] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0000) on tqpair=0x13b98f0 00:38:26.983 [2024-10-01 20:37:22.056419] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:38:26.983 [2024-10-01 20:37:22.056442] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:38:26.983 [2024-10-01 20:37:22.056449] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.983 [2024-10-01 20:37:22.056470] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.983 [2024-10-01 20:37:22.056474] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b98f0) 00:38:26.983 [2024-10-01 20:37:22.056481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.983 [2024-10-01 20:37:22.056500] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0000, cid 0, qid 0 00:38:26.983 [2024-10-01 20:37:22.056553] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.983 [2024-10-01 20:37:22.056560] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.983 [2024-10-01 20:37:22.056563] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.983 [2024-10-01 20:37:22.056568] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0000) on tqpair=0x13b98f0 00:38:26.983 [2024-10-01 20:37:22.056574] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:38:26.983 [2024-10-01 20:37:22.056582] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:38:26.983 [2024-10-01 20:37:22.056590] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.984 [2024-10-01 20:37:22.056594] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.984 [2024-10-01 20:37:22.056598] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b98f0) 00:38:26.984 [2024-10-01 20:37:22.056605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.984 [2024-10-01 20:37:22.056623] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0000, cid 0, qid 0 00:38:26.984 [2024-10-01 20:37:22.056678] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.984 [2024-10-01 20:37:22.056684] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.984 [2024-10-01 20:37:22.056688] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.984 [2024-10-01 20:37:22.056692] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0000) on tqpair=0x13b98f0 00:38:26.984 [2024-10-01 20:37:22.056698] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:38:26.984 [2024-10-01 20:37:22.056708] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.984 [2024-10-01 20:37:22.056713] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.984 [2024-10-01 20:37:22.056717] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b98f0) 00:38:26.984 [2024-10-01 20:37:22.056724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.984 [2024-10-01 20:37:22.056741] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0000, cid 0, qid 0 00:38:26.984 [2024-10-01 20:37:22.056795] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.984 [2024-10-01 20:37:22.056802] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.984 [2024-10-01 20:37:22.056805] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.984 [2024-10-01 20:37:22.056809] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0000) on tqpair=0x13b98f0 00:38:26.984 [2024-10-01 20:37:22.056814] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:38:26.984 [2024-10-01 20:37:22.056820] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:38:26.984 [2024-10-01 20:37:22.056828] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:38:26.984 [2024-10-01 20:37:22.056934] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:38:26.984 [2024-10-01 20:37:22.056939] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:38:26.984 [2024-10-01 20:37:22.056948] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.984 [2024-10-01 20:37:22.056953] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.984 [2024-10-01 20:37:22.056956] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b98f0) 00:38:26.984 [2024-10-01 20:37:22.056964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.984 [2024-10-01 20:37:22.056982] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0000, cid 0, qid 0 00:38:26.984 [2024-10-01 20:37:22.057035] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.984 [2024-10-01 20:37:22.057042] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.984 [2024-10-01 20:37:22.057046] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.984 
[2024-10-01 20:37:22.057050] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0000) on tqpair=0x13b98f0 00:38:26.984 [2024-10-01 20:37:22.057055] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:38:26.984 [2024-10-01 20:37:22.057065] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.984 [2024-10-01 20:37:22.057070] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.984 [2024-10-01 20:37:22.057074] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b98f0) 00:38:26.984 [2024-10-01 20:37:22.057081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.984 [2024-10-01 20:37:22.057099] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0000, cid 0, qid 0 00:38:26.984 [2024-10-01 20:37:22.057148] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.984 [2024-10-01 20:37:22.057168] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.984 [2024-10-01 20:37:22.057174] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.984 [2024-10-01 20:37:22.057178] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0000) on tqpair=0x13b98f0 00:38:26.984 [2024-10-01 20:37:22.057183] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:38:26.984 [2024-10-01 20:37:22.057189] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:38:26.984 [2024-10-01 20:37:22.057198] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:38:26.984 [2024-10-01 20:37:22.057213] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:38:26.984 [2024-10-01 20:37:22.057224] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.984 [2024-10-01 20:37:22.057228] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b98f0) 00:38:26.984 [2024-10-01 20:37:22.057236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.984 [2024-10-01 20:37:22.057258] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0000, cid 0, qid 0 00:38:26.984 [2024-10-01 20:37:22.057352] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:26.984 [2024-10-01 20:37:22.057359] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:26.984 [2024-10-01 20:37:22.057364] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:26.984 [2024-10-01 20:37:22.057368] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13b98f0): datao=0, datal=4096, cccid=0 00:38:26.984 [2024-10-01 20:37:22.057373] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13e0000) on tqpair(0x13b98f0): expected_datao=0, payload_size=4096 00:38:26.984 [2024-10-01 20:37:22.057378] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.984 
[2024-10-01 20:37:22.057386] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:26.984 [2024-10-01 20:37:22.057390] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:26.984 [2024-10-01 20:37:22.057398] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.984 [2024-10-01 20:37:22.057404] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.984 [2024-10-01 20:37:22.057408] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.984 [2024-10-01 20:37:22.057412] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0000) on tqpair=0x13b98f0 00:38:26.984 [2024-10-01 20:37:22.057420] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:38:26.984 [2024-10-01 20:37:22.057426] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:38:26.984 [2024-10-01 20:37:22.057431] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:38:26.984 [2024-10-01 20:37:22.057436] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:38:26.984 [2024-10-01 20:37:22.057441] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:38:26.984 [2024-10-01 20:37:22.057446] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:38:26.984 [2024-10-01 20:37:22.057454] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:38:26.984 [2024-10-01 20:37:22.057466] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.984 [2024-10-01 20:37:22.057472] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.984 [2024-10-01 20:37:22.057476] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b98f0) 00:38:26.984 [2024-10-01 20:37:22.057483] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:26.984 [2024-10-01 20:37:22.057505] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0000, cid 0, qid 0 00:38:26.984 [2024-10-01 20:37:22.057567] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.984 [2024-10-01 20:37:22.057574] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.984 [2024-10-01 20:37:22.057578] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.984 [2024-10-01 20:37:22.057582] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0000) on tqpair=0x13b98f0 00:38:26.984 [2024-10-01 20:37:22.057590] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.984 [2024-10-01 20:37:22.057594] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.984 [2024-10-01 20:37:22.057598] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13b98f0) 00:38:26.984 [2024-10-01 20:37:22.057605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:26.985 [2024-10-01 20:37:22.057611] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.057615] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.057619] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13b98f0) 00:38:26.985 [2024-10-01 20:37:22.057625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:26.985 [2024-10-01 20:37:22.057631] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.057634] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.057638] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13b98f0) 00:38:26.985 [2024-10-01 20:37:22.057644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:26.985 [2024-10-01 20:37:22.057650] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.057654] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.057658] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.985 [2024-10-01 20:37:22.057663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:26.985 [2024-10-01 20:37:22.057668] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:38:26.985 [2024-10-01 20:37:22.057681] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:38:26.985 [2024-10-01 20:37:22.057688] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.057692] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13b98f0) 00:38:26.985 [2024-10-01 20:37:22.057699] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.985 [2024-10-01 20:37:22.057720] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0000, cid 0, qid 0 00:38:26.985 [2024-10-01 20:37:22.057727] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0180, cid 1, qid 0 00:38:26.985 [2024-10-01 20:37:22.057732] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0300, cid 2, qid 0 00:38:26.985 [2024-10-01 20:37:22.057737] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.985 [2024-10-01 20:37:22.057742] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0600, cid 4, qid 0 00:38:26.985 [2024-10-01 20:37:22.057827] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.985 [2024-10-01 20:37:22.057833] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.985 [2024-10-01 20:37:22.057837] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.057841] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0600) on tqpair=0x13b98f0 00:38:26.985 [2024-10-01 20:37:22.057846] 
nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:38:26.985 [2024-10-01 20:37:22.057852] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:38:26.985 [2024-10-01 20:37:22.057864] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.057869] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13b98f0) 00:38:26.985 [2024-10-01 20:37:22.057876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.985 [2024-10-01 20:37:22.057894] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0600, cid 4, qid 0 00:38:26.985 [2024-10-01 20:37:22.057961] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:26.985 [2024-10-01 20:37:22.057968] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:26.985 [2024-10-01 20:37:22.057971] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.057975] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13b98f0): datao=0, datal=4096, cccid=4 00:38:26.985 [2024-10-01 20:37:22.057980] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13e0600) on tqpair(0x13b98f0): expected_datao=0, payload_size=4096 00:38:26.985 [2024-10-01 20:37:22.057984] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.057992] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.057996] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.058003] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.985 [2024-10-01 20:37:22.058009] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.985 [2024-10-01 20:37:22.058013] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.058017] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0600) on tqpair=0x13b98f0 00:38:26.985 [2024-10-01 20:37:22.058030] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:38:26.985 [2024-10-01 20:37:22.058057] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.058062] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13b98f0) 00:38:26.985 [2024-10-01 20:37:22.058070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.985 [2024-10-01 20:37:22.058078] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.058082] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.058085] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13b98f0) 00:38:26.985 [2024-10-01 20:37:22.058091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:38:26.985 [2024-10-01 20:37:22.058127] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x13e0600, cid 4, qid 0 00:38:26.985 [2024-10-01 20:37:22.058136] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0780, cid 5, qid 0 00:38:26.985 [2024-10-01 20:37:22.058244] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:26.985 [2024-10-01 20:37:22.058251] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:26.985 [2024-10-01 20:37:22.058255] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.058259] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13b98f0): datao=0, datal=1024, cccid=4 00:38:26.985 [2024-10-01 20:37:22.058263] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13e0600) on tqpair(0x13b98f0): expected_datao=0, payload_size=1024 00:38:26.985 [2024-10-01 20:37:22.058268] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.058275] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.058279] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.058284] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.985 [2024-10-01 20:37:22.058290] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.985 [2024-10-01 20:37:22.058294] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.058298] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0780) on tqpair=0x13b98f0 00:38:26.985 [2024-10-01 20:37:22.099226] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.985 [2024-10-01 20:37:22.099247] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.985 [2024-10-01 20:37:22.099268] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.099273] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0600) on tqpair=0x13b98f0 00:38:26.985 [2024-10-01 20:37:22.099292] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.099298] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13b98f0) 00:38:26.985 [2024-10-01 20:37:22.099307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.985 [2024-10-01 20:37:22.099338] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0600, cid 4, qid 0 00:38:26.985 [2024-10-01 20:37:22.099421] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:26.985 [2024-10-01 20:37:22.099428] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:26.985 [2024-10-01 20:37:22.099431] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.099435] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13b98f0): datao=0, datal=3072, cccid=4 00:38:26.985 [2024-10-01 20:37:22.099439] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13e0600) on tqpair(0x13b98f0): expected_datao=0, payload_size=3072 00:38:26.985 [2024-10-01 20:37:22.099444] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.099451] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.099455] 
nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.099480] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.985 [2024-10-01 20:37:22.099486] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.985 [2024-10-01 20:37:22.099505] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.099510] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0600) on tqpair=0x13b98f0 00:38:26.985 [2024-10-01 20:37:22.099535] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.985 [2024-10-01 20:37:22.099539] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13b98f0) 00:38:26.986 [2024-10-01 20:37:22.099547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.986 [2024-10-01 20:37:22.099572] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0600, cid 4, qid 0 00:38:26.986 [2024-10-01 20:37:22.099646] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:26.986 [2024-10-01 20:37:22.099653] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:26.986 [2024-10-01 20:37:22.099657] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:26.986 [2024-10-01 20:37:22.099660] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13b98f0): datao=0, datal=8, cccid=4 00:38:26.986 [2024-10-01 20:37:22.099665] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13e0600) on tqpair(0x13b98f0): expected_datao=0, payload_size=8 00:38:26.986 [2024-10-01 20:37:22.099670] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.986 [2024-10-01 20:37:22.099676] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:26.986 [2024-10-01 20:37:22.099680] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:26.986 ===================================================== 00:38:26.986 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:38:26.986 ===================================================== 00:38:26.986 Controller Capabilities/Features 00:38:26.986 ================================ 00:38:26.986 Vendor ID: 0000 00:38:26.986 Subsystem Vendor ID: 0000 00:38:26.986 Serial Number: .................... 00:38:26.986 Model Number: ........................................ 
00:38:26.986 Firmware Version: 25.01 00:38:26.986 Recommended Arb Burst: 0 00:38:26.986 IEEE OUI Identifier: 00 00 00 00:38:26.986 Multi-path I/O 00:38:26.986 May have multiple subsystem ports: No 00:38:26.986 May have multiple controllers: No 00:38:26.986 Associated with SR-IOV VF: No 00:38:26.986 Max Data Transfer Size: 131072 00:38:26.986 Max Number of Namespaces: 0 00:38:26.986 Max Number of I/O Queues: 1024 00:38:26.986 NVMe Specification Version (VS): 1.3 00:38:26.986 NVMe Specification Version (Identify): 1.3 00:38:26.986 Maximum Queue Entries: 128 00:38:26.986 Contiguous Queues Required: Yes 00:38:26.986 Arbitration Mechanisms Supported 00:38:26.986 Weighted Round Robin: Not Supported 00:38:26.986 Vendor Specific: Not Supported 00:38:26.986 Reset Timeout: 15000 ms 00:38:26.986 Doorbell Stride: 4 bytes 00:38:26.986 NVM Subsystem Reset: Not Supported 00:38:26.986 Command Sets Supported 00:38:26.986 NVM Command Set: Supported 00:38:26.986 Boot Partition: Not Supported 00:38:26.986 Memory Page Size Minimum: 4096 bytes 00:38:26.986 Memory Page Size Maximum: 4096 bytes 00:38:26.986 Persistent Memory Region: Not Supported 00:38:26.986 Optional Asynchronous Events Supported 00:38:26.986 Namespace Attribute Notices: Not Supported 00:38:26.986 Firmware Activation Notices: Not Supported 00:38:26.986 ANA Change Notices: Not Supported 00:38:26.986 PLE Aggregate Log Change Notices: Not Supported 00:38:26.986 LBA Status Info Alert Notices: Not Supported 00:38:26.986 EGE Aggregate Log Change Notices: Not Supported 00:38:26.986 Normal NVM Subsystem Shutdown event: Not Supported 00:38:26.986 Zone Descriptor Change Notices: Not Supported 00:38:26.986 Discovery Log Change Notices: Supported 00:38:26.986 Controller Attributes 00:38:26.986 128-bit Host Identifier: Not Supported 00:38:26.986 Non-Operational Permissive Mode: Not Supported 00:38:26.986 NVM Sets: Not Supported 00:38:26.986 Read Recovery Levels: Not Supported 00:38:26.986 Endurance Groups: Not Supported 00:38:26.986 Predictable Latency Mode: Not Supported 00:38:26.986 Traffic Based Keep ALive: Not Supported 00:38:26.986 Namespace Granularity: Not Supported 00:38:26.986 SQ Associations: Not Supported 00:38:26.986 UUID List: Not Supported 00:38:26.986 Multi-Domain Subsystem: Not Supported 00:38:26.986 Fixed Capacity Management: Not Supported 00:38:26.986 Variable Capacity Management: Not Supported 00:38:26.986 Delete Endurance Group: Not Supported 00:38:26.986 Delete NVM Set: Not Supported 00:38:26.986 Extended LBA Formats Supported: Not Supported 00:38:26.986 Flexible Data Placement Supported: Not Supported 00:38:26.986 00:38:26.986 Controller Memory Buffer Support 00:38:26.986 ================================ 00:38:26.986 Supported: No 00:38:26.986 00:38:26.986 Persistent Memory Region Support 00:38:26.986 ================================ 00:38:26.986 Supported: No 00:38:26.986 00:38:26.986 Admin Command Set Attributes 00:38:26.986 ============================ 00:38:26.986 Security Send/Receive: Not Supported 00:38:26.986 Format NVM: Not Supported 00:38:26.986 Firmware Activate/Download: Not Supported 00:38:26.986 Namespace Management: Not Supported 00:38:26.986 Device Self-Test: Not Supported 00:38:26.986 Directives: Not Supported 00:38:26.986 NVMe-MI: Not Supported 00:38:26.986 Virtualization Management: Not Supported 00:38:26.986 Doorbell Buffer Config: Not Supported 00:38:26.986 Get LBA Status Capability: Not Supported 00:38:26.986 Command & Feature Lockdown Capability: Not Supported 00:38:26.986 Abort Command Limit: 1 00:38:26.986 Async 
Event Request Limit: 4 00:38:26.986 Number of Firmware Slots: N/A 00:38:26.986 Firmware Slot 1 Read-Only: N/A 00:38:26.986 [2024-10-01 20:37:22.143135] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.986 [2024-10-01 20:37:22.143155] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.986 [2024-10-01 20:37:22.143176] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.986 [2024-10-01 20:37:22.143181] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0600) on tqpair=0x13b98f0 00:38:26.986 Firmware Activation Without Reset: N/A 00:38:26.986 Multiple Update Detection Support: N/A 00:38:26.986 Firmware Update Granularity: No Information Provided 00:38:26.986 Per-Namespace SMART Log: No 00:38:26.986 Asymmetric Namespace Access Log Page: Not Supported 00:38:26.986 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:38:26.986 Command Effects Log Page: Not Supported 00:38:26.986 Get Log Page Extended Data: Supported 00:38:26.986 Telemetry Log Pages: Not Supported 00:38:26.986 Persistent Event Log Pages: Not Supported 00:38:26.986 Supported Log Pages Log Page: May Support 00:38:26.986 Commands Supported & Effects Log Page: Not Supported 00:38:26.986 Feature Identifiers & Effects Log Page:May Support 00:38:26.986 NVMe-MI Commands & Effects Log Page: May Support 00:38:26.986 Data Area 4 for Telemetry Log: Not Supported 00:38:26.986 Error Log Page Entries Supported: 128 00:38:26.986 Keep Alive: Not Supported 00:38:26.986 00:38:26.986 NVM Command Set Attributes 00:38:26.986 ========================== 00:38:26.986 Submission Queue Entry Size 00:38:26.986 Max: 1 00:38:26.986 Min: 1 00:38:26.986 Completion Queue Entry Size 00:38:26.986 Max: 1 00:38:26.986 Min: 1 00:38:26.986 Number of Namespaces: 0 00:38:26.986 Compare Command: Not Supported 00:38:26.986 Write Uncorrectable Command: Not Supported 00:38:26.986 Dataset Management Command: Not Supported 00:38:26.986 Write Zeroes Command: Not Supported 00:38:26.986 Set Features Save Field: Not Supported 00:38:26.986 Reservations: Not Supported 00:38:26.986 Timestamp: Not Supported 00:38:26.986 Copy: Not Supported 00:38:26.986 Volatile Write Cache: Not Present 00:38:26.986 Atomic Write Unit (Normal): 1 00:38:26.986 Atomic Write Unit (PFail): 1 00:38:26.986 Atomic Compare & Write Unit: 1 00:38:26.986 Fused Compare & Write: Supported 00:38:26.986 Scatter-Gather List 00:38:26.986 SGL Command Set: Supported 00:38:26.986 SGL Keyed: Supported 00:38:26.986 SGL Bit Bucket Descriptor: Not Supported 00:38:26.986 SGL Metadata Pointer: Not Supported 00:38:26.986 Oversized SGL: Not Supported 00:38:26.986 SGL Metadata Address: Not Supported 00:38:26.986 SGL Offset: Supported 00:38:26.986 Transport SGL Data Block: Not Supported 00:38:26.986 Replay Protected Memory Block: Not Supported 00:38:26.986 00:38:26.986 Firmware Slot Information 00:38:26.986 ========================= 00:38:26.986 Active slot: 0 00:38:26.986 00:38:26.986 00:38:26.986 Error Log 00:38:26.986 ========= 00:38:26.986 00:38:26.986 Active Namespaces 00:38:26.986 ================= 00:38:26.986 Discovery Log Page 00:38:26.986 ================== 00:38:26.986 Generation Counter: 2 00:38:26.986 Number of Records: 2 00:38:26.986 Record Format: 0 00:38:26.987 00:38:26.987 Discovery Log Entry 0 00:38:26.987 ---------------------- 00:38:26.987 Transport Type: 3 (TCP) 00:38:26.987 Address Family: 1 (IPv4) 00:38:26.987 Subsystem Type: 3 (Current Discovery Subsystem) 00:38:26.987 Entry Flags: 00:38:26.987 Duplicate Returned
Information: 1 00:38:26.987 Explicit Persistent Connection Support for Discovery: 1 00:38:26.987 Transport Requirements: 00:38:26.987 Secure Channel: Not Required 00:38:26.987 Port ID: 0 (0x0000) 00:38:26.987 Controller ID: 65535 (0xffff) 00:38:26.987 Admin Max SQ Size: 128 00:38:26.987 Transport Service Identifier: 4420 00:38:26.987 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:38:26.987 Transport Address: 10.0.0.3 00:38:26.987 Discovery Log Entry 1 00:38:26.987 ---------------------- 00:38:26.987 Transport Type: 3 (TCP) 00:38:26.987 Address Family: 1 (IPv4) 00:38:26.987 Subsystem Type: 2 (NVM Subsystem) 00:38:26.987 Entry Flags: 00:38:26.987 Duplicate Returned Information: 0 00:38:26.987 Explicit Persistent Connection Support for Discovery: 0 00:38:26.987 Transport Requirements: 00:38:26.987 Secure Channel: Not Required 00:38:26.987 Port ID: 0 (0x0000) 00:38:26.987 Controller ID: 65535 (0xffff) 00:38:26.987 Admin Max SQ Size: 128 00:38:26.987 Transport Service Identifier: 4420 00:38:26.987 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:38:26.987 Transport Address: 10.0.0.3 [2024-10-01 20:37:22.143267] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:38:26.987 [2024-10-01 20:37:22.143280] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0000) on tqpair=0x13b98f0 00:38:26.987 [2024-10-01 20:37:22.143287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.987 [2024-10-01 20:37:22.143293] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0180) on tqpair=0x13b98f0 00:38:26.987 [2024-10-01 20:37:22.143297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.987 [2024-10-01 20:37:22.143302] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0300) on tqpair=0x13b98f0 00:38:26.987 [2024-10-01 20:37:22.143306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.987 [2024-10-01 20:37:22.143311] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.987 [2024-10-01 20:37:22.143315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:26.987 [2024-10-01 20:37:22.143325] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.987 [2024-10-01 20:37:22.143329] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.987 [2024-10-01 20:37:22.143333] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.987 [2024-10-01 20:37:22.143341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.987 [2024-10-01 20:37:22.143366] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.987 [2024-10-01 20:37:22.143439] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.987 [2024-10-01 20:37:22.143446] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.987 [2024-10-01 20:37:22.143449] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.987 [2024-10-01 20:37:22.143453] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.987 [2024-10-01 20:37:22.143462] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.987 [2024-10-01 20:37:22.143466] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.987 [2024-10-01 20:37:22.143469] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.987 [2024-10-01 20:37:22.143476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.987 [2024-10-01 20:37:22.143515] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.987 [2024-10-01 20:37:22.143583] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.987 [2024-10-01 20:37:22.143590] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.987 [2024-10-01 20:37:22.143593] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.987 [2024-10-01 20:37:22.143597] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.987 [2024-10-01 20:37:22.143602] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:38:26.987 [2024-10-01 20:37:22.143607] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:38:26.987 [2024-10-01 20:37:22.143617] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.987 [2024-10-01 20:37:22.143622] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.987 [2024-10-01 20:37:22.143625] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.987 [2024-10-01 20:37:22.143633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.987 [2024-10-01 20:37:22.143650] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.987 [2024-10-01 20:37:22.143702] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.987 [2024-10-01 20:37:22.143708] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.987 [2024-10-01 20:37:22.143712] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.987 [2024-10-01 20:37:22.143716] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.987 [2024-10-01 20:37:22.143753] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.987 [2024-10-01 20:37:22.143774] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.987 [2024-10-01 20:37:22.143778] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.987 [2024-10-01 20:37:22.143786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.987 [2024-10-01 20:37:22.143806] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.987 [2024-10-01 20:37:22.143861] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.987 [2024-10-01 20:37:22.143868] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.987 [2024-10-01 
20:37:22.143872] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.987 [2024-10-01 20:37:22.143876] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.987 [2024-10-01 20:37:22.143887] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.987 [2024-10-01 20:37:22.143892] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.987 [2024-10-01 20:37:22.143896] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.987 [2024-10-01 20:37:22.143904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.987 [2024-10-01 20:37:22.143922] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.987 [2024-10-01 20:37:22.143977] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.987 [2024-10-01 20:37:22.143984] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.987 [2024-10-01 20:37:22.143988] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.987 [2024-10-01 20:37:22.143992] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.987 [2024-10-01 20:37:22.144003] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.987 [2024-10-01 20:37:22.144007] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.987 [2024-10-01 20:37:22.144011] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.987 [2024-10-01 20:37:22.144019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.987 [2024-10-01 20:37:22.144037] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.987 [2024-10-01 20:37:22.144103] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.987 [2024-10-01 20:37:22.144125] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.987 [2024-10-01 20:37:22.144128] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.987 [2024-10-01 20:37:22.144144] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.987 [2024-10-01 20:37:22.144155] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.987 [2024-10-01 20:37:22.144159] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.987 [2024-10-01 20:37:22.144163] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.987 [2024-10-01 20:37:22.144171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.987 [2024-10-01 20:37:22.144227] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.988 [2024-10-01 20:37:22.144283] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.988 [2024-10-01 20:37:22.144291] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.988 [2024-10-01 20:37:22.144295] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.144299] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on 
tqpair=0x13b98f0 00:38:26.988 [2024-10-01 20:37:22.144310] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.144315] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.144319] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.988 [2024-10-01 20:37:22.144327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.988 [2024-10-01 20:37:22.144346] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.988 [2024-10-01 20:37:22.144398] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.988 [2024-10-01 20:37:22.144405] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.988 [2024-10-01 20:37:22.144409] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.144413] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.988 [2024-10-01 20:37:22.144424] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.144428] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.144432] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.988 [2024-10-01 20:37:22.144440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.988 [2024-10-01 20:37:22.144458] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.988 [2024-10-01 20:37:22.144510] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.988 [2024-10-01 20:37:22.144532] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.988 [2024-10-01 20:37:22.144536] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.144540] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.988 [2024-10-01 20:37:22.144564] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.144569] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.144573] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.988 [2024-10-01 20:37:22.144581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.988 [2024-10-01 20:37:22.144598] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.988 [2024-10-01 20:37:22.144647] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.988 [2024-10-01 20:37:22.144654] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.988 [2024-10-01 20:37:22.144657] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.144662] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.988 [2024-10-01 20:37:22.144671] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.144676] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.144680] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.988 [2024-10-01 20:37:22.144687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.988 [2024-10-01 20:37:22.144704] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.988 [2024-10-01 20:37:22.144752] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.988 [2024-10-01 20:37:22.144759] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.988 [2024-10-01 20:37:22.144763] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.144767] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.988 [2024-10-01 20:37:22.144777] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.144781] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.144785] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.988 [2024-10-01 20:37:22.144792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.988 [2024-10-01 20:37:22.144809] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.988 [2024-10-01 20:37:22.144865] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.988 [2024-10-01 20:37:22.144871] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.988 [2024-10-01 20:37:22.144875] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.144879] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.988 [2024-10-01 20:37:22.144889] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.144893] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.144897] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.988 [2024-10-01 20:37:22.144904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.988 [2024-10-01 20:37:22.144921] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.988 [2024-10-01 20:37:22.144972] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.988 [2024-10-01 20:37:22.144979] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.988 [2024-10-01 20:37:22.144982] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.144986] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.988 [2024-10-01 20:37:22.144996] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.145001] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.145004] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.988 
[2024-10-01 20:37:22.145012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.988 [2024-10-01 20:37:22.145028] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.988 [2024-10-01 20:37:22.145081] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.988 [2024-10-01 20:37:22.145093] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.988 [2024-10-01 20:37:22.145097] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.145101] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.988 [2024-10-01 20:37:22.145121] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.145127] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.145131] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.988 [2024-10-01 20:37:22.145138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.988 [2024-10-01 20:37:22.145174] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.988 [2024-10-01 20:37:22.145236] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.988 [2024-10-01 20:37:22.145243] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.988 [2024-10-01 20:37:22.145247] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.145251] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.988 [2024-10-01 20:37:22.145262] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.145266] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.988 [2024-10-01 20:37:22.145270] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.989 [2024-10-01 20:37:22.145277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.989 [2024-10-01 20:37:22.145295] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.989 [2024-10-01 20:37:22.145345] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.989 [2024-10-01 20:37:22.145352] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.989 [2024-10-01 20:37:22.145356] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.145360] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.989 [2024-10-01 20:37:22.145370] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.145375] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.145379] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.989 [2024-10-01 20:37:22.145386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.989 [2024-10-01 20:37:22.145404] 
nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.989 [2024-10-01 20:37:22.145457] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.989 [2024-10-01 20:37:22.145464] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.989 [2024-10-01 20:37:22.145468] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.145472] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.989 [2024-10-01 20:37:22.145482] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.145487] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.145491] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.989 [2024-10-01 20:37:22.145498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.989 [2024-10-01 20:37:22.145516] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.989 [2024-10-01 20:37:22.145583] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.989 [2024-10-01 20:37:22.145590] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.989 [2024-10-01 20:37:22.145593] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.145597] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.989 [2024-10-01 20:37:22.145607] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.145612] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.145615] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.989 [2024-10-01 20:37:22.145622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.989 [2024-10-01 20:37:22.145639] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.989 [2024-10-01 20:37:22.145687] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.989 [2024-10-01 20:37:22.145693] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.989 [2024-10-01 20:37:22.145697] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.145701] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.989 [2024-10-01 20:37:22.145711] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.145716] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.145719] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.989 [2024-10-01 20:37:22.145726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.989 [2024-10-01 20:37:22.145744] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.989 [2024-10-01 20:37:22.145794] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.989 
[2024-10-01 20:37:22.145801] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.989 [2024-10-01 20:37:22.145804] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.145808] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.989 [2024-10-01 20:37:22.145819] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.145823] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.145827] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.989 [2024-10-01 20:37:22.145834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.989 [2024-10-01 20:37:22.145851] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.989 [2024-10-01 20:37:22.145901] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.989 [2024-10-01 20:37:22.145908] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.989 [2024-10-01 20:37:22.145911] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.145915] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.989 [2024-10-01 20:37:22.145925] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.145930] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.145933] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.989 [2024-10-01 20:37:22.145941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.989 [2024-10-01 20:37:22.145957] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.989 [2024-10-01 20:37:22.146011] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.989 [2024-10-01 20:37:22.146017] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.989 [2024-10-01 20:37:22.146022] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.146025] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.989 [2024-10-01 20:37:22.146035] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.146040] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.146044] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.989 [2024-10-01 20:37:22.146051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.989 [2024-10-01 20:37:22.146068] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.989 [2024-10-01 20:37:22.146118] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.989 [2024-10-01 20:37:22.146124] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.989 [2024-10-01 20:37:22.146128] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
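The identical FABRIC PROPERTY GET / completion cycles repeating through this stretch are, in effect, the host re-reading the discovery controller's CSTS register until its shutdown-status field reports completion; the loop ends a few entries further on with "shutdown complete in 7 milliseconds". A minimal C sketch of the check the poll is making, assuming a hypothetical already-connected ctrlr handle; the SPDK calls are real public API, the helper name is invented:

/* Sketch of the shutdown check driven by the poll loop in this log.
 * Assumes `ctrlr` is a connected struct spdk_nvme_ctrlr * (hypothetical
 * context); on a fabrics transport each CSTS read goes out as a
 * FABRIC PROPERTY GET capsule like the ones printed here. */
#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static bool
shutdown_complete(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	/* CSTS.SHST == 10b: shutdown processing complete. */
	return csts.bits.shst == SPDK_NVME_SHST_COMPLETE;
}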
00:38:26.989 [2024-10-01 20:37:22.146132] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.989 [2024-10-01 20:37:22.146153] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.146159] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.146163] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.989 [2024-10-01 20:37:22.146170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.989 [2024-10-01 20:37:22.146190] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.989 [2024-10-01 20:37:22.146242] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.989 [2024-10-01 20:37:22.146249] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.989 [2024-10-01 20:37:22.146253] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.146257] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.989 [2024-10-01 20:37:22.146267] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.146271] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.989 [2024-10-01 20:37:22.146275] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.989 [2024-10-01 20:37:22.146282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.989 [2024-10-01 20:37:22.146300] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.989 [2024-10-01 20:37:22.146353] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.990 [2024-10-01 20:37:22.146359] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.990 [2024-10-01 20:37:22.146363] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.146367] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.990 [2024-10-01 20:37:22.146377] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.146381] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.146385] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.990 [2024-10-01 20:37:22.146392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.990 [2024-10-01 20:37:22.146409] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.990 [2024-10-01 20:37:22.146462] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.990 [2024-10-01 20:37:22.146469] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.990 [2024-10-01 20:37:22.146472] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.146476] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.990 [2024-10-01 20:37:22.146486] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.146491] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.146495] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.990 [2024-10-01 20:37:22.146503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.990 [2024-10-01 20:37:22.146520] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.990 [2024-10-01 20:37:22.146570] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.990 [2024-10-01 20:37:22.146577] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.990 [2024-10-01 20:37:22.146580] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.146584] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.990 [2024-10-01 20:37:22.146594] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.146599] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.146602] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.990 [2024-10-01 20:37:22.146610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.990 [2024-10-01 20:37:22.146628] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.990 [2024-10-01 20:37:22.146679] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.990 [2024-10-01 20:37:22.146685] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.990 [2024-10-01 20:37:22.146689] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.146693] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.990 [2024-10-01 20:37:22.146703] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.146708] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.146711] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.990 [2024-10-01 20:37:22.146719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.990 [2024-10-01 20:37:22.146736] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.990 [2024-10-01 20:37:22.146788] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.990 [2024-10-01 20:37:22.146795] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.990 [2024-10-01 20:37:22.146799] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.146803] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.990 [2024-10-01 20:37:22.146813] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.146817] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.146821] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.990 [2024-10-01 20:37:22.146828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.990 [2024-10-01 20:37:22.146845] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.990 [2024-10-01 20:37:22.146896] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.990 [2024-10-01 20:37:22.146902] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.990 [2024-10-01 20:37:22.146906] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.146910] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.990 [2024-10-01 20:37:22.146920] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.146925] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.146929] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.990 [2024-10-01 20:37:22.146936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.990 [2024-10-01 20:37:22.146953] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.990 [2024-10-01 20:37:22.147003] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.990 [2024-10-01 20:37:22.147010] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.990 [2024-10-01 20:37:22.147013] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.147017] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.990 [2024-10-01 20:37:22.147027] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.147032] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.147036] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.990 [2024-10-01 20:37:22.147043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:26.990 [2024-10-01 20:37:22.147060] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0 00:38:26.990 [2024-10-01 20:37:22.151140] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:26.990 [2024-10-01 20:37:22.151156] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:26.990 [2024-10-01 20:37:22.151161] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.151165] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0 00:38:26.990 [2024-10-01 20:37:22.151179] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.151184] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:26.990 [2024-10-01 20:37:22.151188] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13b98f0) 00:38:26.990 [2024-10-01 20:37:22.151196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:26.990 [2024-10-01 20:37:22.151220] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13e0480, cid 3, qid 0
00:38:26.991 [2024-10-01 20:37:22.151292] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:38:26.991 [2024-10-01 20:37:22.151298] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:38:26.991 [2024-10-01 20:37:22.151302] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:38:26.991 [2024-10-01 20:37:22.151306] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13e0480) on tqpair=0x13b98f0
00:38:26.991 [2024-10-01 20:37:22.151313] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds
00:38:26.991
00:38:26.991 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:38:26.991 [2024-10-01 20:37:22.193026] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization...
00:38:26.991 [2024-10-01 20:37:22.193079] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86631 ]
00:38:27.254 [2024-10-01 20:37:22.329508] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:38:27.254 [2024-10-01 20:37:22.329572] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:38:27.254 [2024-10-01 20:37:22.329579] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:38:27.254 [2024-10-01 20:37:22.329588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:38:27.254 [2024-10-01 20:37:22.329596] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:38:27.254 [2024-10-01 20:37:22.329851] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:38:27.254 [2024-10-01 20:37:22.329912] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1bcf8f0 0
00:38:27.254 [2024-10-01 20:37:22.336170] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:38:27.254 [2024-10-01 20:37:22.336201] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:38:27.254 [2024-10-01 20:37:22.336207] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:38:27.254 [2024-10-01 20:37:22.336210] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:38:27.254 [2024-10-01 20:37:22.336240] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:38:27.254 [2024-10-01 20:37:22.336247] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:38:27.254 [2024-10-01 20:37:22.336251] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bcf8f0)
00:38:27.254 [2024-10-01 20:37:22.336262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
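The run above finishes the previous discovery session and starts spdk_nvme_identify against nqn.2016-06.io.spdk:cnode1: the TCP socket is created, the ICReq/ICResp handshake leaves both digests off (host_hdgst_enable/host_ddgst_enable 0), and the first capsule on qid 0 is FABRIC CONNECT. A minimal sketch of the same connection through SPDK's public API, using the target string from the -r option above; error handling trimmed to the essentials:

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same target the identify tool was pointed at with -r. */
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:tcp adrfam:IPv4 traddr:10.0.0.3 "
			"trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Synchronous connect; this one call drives the state machine
	 * logged here: socket connect, ICReq/ICResp, FABRIC CONNECT,
	 * register reads, IDENTIFY, AER setup, and so on. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	spdk_nvme_detach(ctrlr);
	return 0;
}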
00:38:27.254 [2024-10-01 20:37:22.336293] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6000, cid 0, qid 0
00:38:27.254 [2024-10-01 20:37:22.344173] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:38:27.254 [2024-10-01 20:37:22.344193] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:38:27.254 [2024-10-01 20:37:22.344215] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:38:27.254 [2024-10-01 20:37:22.344220] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6000) on tqpair=0x1bcf8f0
00:38:27.254 [2024-10-01 20:37:22.344232] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:38:27.254 [2024-10-01 20:37:22.344240] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:38:27.254 [2024-10-01 20:37:22.344246] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:38:27.254 [2024-10-01 20:37:22.344260] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:38:27.254 [2024-10-01 20:37:22.344264] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:38:27.254 [2024-10-01 20:37:22.344268] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bcf8f0)
00:38:27.254 [2024-10-01 20:37:22.344277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:27.254 [2024-10-01 20:37:22.344305] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6000, cid 0, qid 0
00:38:27.254 [2024-10-01 20:37:22.344367] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:38:27.254 [2024-10-01 20:37:22.344375] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:38:27.254 [2024-10-01 20:37:22.344378] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:38:27.255 [2024-10-01 20:37:22.344383] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6000) on tqpair=0x1bcf8f0
00:38:27.255 [2024-10-01 20:37:22.344388] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:38:27.255 [2024-10-01 20:37:22.344396] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:38:27.255 [2024-10-01 20:37:22.344404] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:38:27.255 [2024-10-01 20:37:22.344408] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:38:27.255 [2024-10-01 20:37:22.344412] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bcf8f0)
00:38:27.255 [2024-10-01 20:37:22.344419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:27.255 [2024-10-01 20:37:22.344455] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6000, cid 0, qid 0
00:38:27.255 [2024-10-01 20:37:22.344509] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:38:27.255 [2024-10-01 20:37:22.344516] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:38:27.255 [2024-10-01 20:37:22.344520] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:38:27.255 [2024-10-01 20:37:22.344524] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6000) on tqpair=0x1bcf8f0
00:38:27.255 [2024-10-01 20:37:22.344530]
nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:38:27.255 [2024-10-01 20:37:22.344539] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:38:27.255 [2024-10-01 20:37:22.344547] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.255 [2024-10-01 20:37:22.344551] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.255 [2024-10-01 20:37:22.344554] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bcf8f0) 00:38:27.255 [2024-10-01 20:37:22.344562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.255 [2024-10-01 20:37:22.344582] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6000, cid 0, qid 0 00:38:27.255 [2024-10-01 20:37:22.344634] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.255 [2024-10-01 20:37:22.344641] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.255 [2024-10-01 20:37:22.344645] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.255 [2024-10-01 20:37:22.344649] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6000) on tqpair=0x1bcf8f0 00:38:27.255 [2024-10-01 20:37:22.344654] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:38:27.255 [2024-10-01 20:37:22.344665] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.255 [2024-10-01 20:37:22.344670] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.255 [2024-10-01 20:37:22.344674] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bcf8f0) 00:38:27.255 [2024-10-01 20:37:22.344681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.255 [2024-10-01 20:37:22.344701] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6000, cid 0, qid 0 00:38:27.255 [2024-10-01 20:37:22.344756] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.255 [2024-10-01 20:37:22.344763] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.255 [2024-10-01 20:37:22.344767] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.255 [2024-10-01 20:37:22.344771] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6000) on tqpair=0x1bcf8f0 00:38:27.255 [2024-10-01 20:37:22.344776] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:38:27.255 [2024-10-01 20:37:22.344781] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:38:27.255 [2024-10-01 20:37:22.344789] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:38:27.255 [2024-10-01 20:37:22.344895] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:38:27.255 [2024-10-01 20:37:22.344899] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable 
controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:38:27.255 [2024-10-01 20:37:22.344908] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.255 [2024-10-01 20:37:22.344912] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.255 [2024-10-01 20:37:22.344916] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bcf8f0) 00:38:27.255 [2024-10-01 20:37:22.344923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.255 [2024-10-01 20:37:22.344943] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6000, cid 0, qid 0 00:38:27.255 [2024-10-01 20:37:22.344994] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.255 [2024-10-01 20:37:22.345000] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.255 [2024-10-01 20:37:22.345004] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.255 [2024-10-01 20:37:22.345009] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6000) on tqpair=0x1bcf8f0 00:38:27.255 [2024-10-01 20:37:22.345014] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:38:27.255 [2024-10-01 20:37:22.345025] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.255 [2024-10-01 20:37:22.345029] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.255 [2024-10-01 20:37:22.345033] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bcf8f0) 00:38:27.255 [2024-10-01 20:37:22.345040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.255 [2024-10-01 20:37:22.345060] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6000, cid 0, qid 0 00:38:27.255 [2024-10-01 20:37:22.345109] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.255 [2024-10-01 20:37:22.345116] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.255 [2024-10-01 20:37:22.345120] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.255 [2024-10-01 20:37:22.345124] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6000) on tqpair=0x1bcf8f0 00:38:27.255 [2024-10-01 20:37:22.345129] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:38:27.255 [2024-10-01 20:37:22.345134] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:38:27.255 [2024-10-01 20:37:22.345156] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:38:27.255 [2024-10-01 20:37:22.345173] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:38:27.255 [2024-10-01 20:37:22.345183] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.255 [2024-10-01 20:37:22.345188] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bcf8f0) 00:38:27.255 [2024-10-01 20:37:22.345196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) 
qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:27.255 [2024-10-01 20:37:22.345218] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6000, cid 0, qid 0
00:38:27.255 [2024-10-01 20:37:22.345313] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:38:27.255 [2024-10-01 20:37:22.345320] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:38:27.255 [2024-10-01 20:37:22.345324] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:38:27.255 [2024-10-01 20:37:22.345328] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bcf8f0): datao=0, datal=4096, cccid=0
00:38:27.255 [2024-10-01 20:37:22.345333] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bf6000) on tqpair(0x1bcf8f0): expected_datao=0, payload_size=4096
00:38:27.255 [2024-10-01 20:37:22.345338] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:38:27.255 [2024-10-01 20:37:22.345345] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:38:27.255 [2024-10-01 20:37:22.345349] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:38:27.255 [2024-10-01 20:37:22.345358] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:38:27.255 [2024-10-01 20:37:22.345364] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:38:27.255 [2024-10-01 20:37:22.345368] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:38:27.255 [2024-10-01 20:37:22.345372] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6000) on tqpair=0x1bcf8f0
00:38:27.255 [2024-10-01 20:37:22.345380] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:38:27.255 [2024-10-01 20:37:22.345385] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:38:27.256 [2024-10-01 20:37:22.345389] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:38:27.256 [2024-10-01 20:37:22.345393] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:38:27.256 [2024-10-01 20:37:22.345398] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:38:27.256 [2024-10-01 20:37:22.345403] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:38:27.256 [2024-10-01 20:37:22.345412] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:38:27.256 [2024-10-01 20:37:22.345424] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:38:27.256 [2024-10-01 20:37:22.345429] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:38:27.256 [2024-10-01 20:37:22.345432] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bcf8f0)
00:38:27.256 [2024-10-01 20:37:22.345440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:38:27.256 [2024-10-01 20:37:22.345462] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6000, cid 0, qid 0
00:38:27.256 [2024-10-01 20:37:22.345518] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
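nvme_ctrlr_identify_done above records what the 4096-byte IDENTIFY (CNS 01) payload resolved to: transport max_xfer_size 4294967295, MDTS-limited max_xfer_size 131072, CNTLID 0x0001, 16 SGEs, and compare-and-write fuse support. An application reads the same data from the driver's cached copy; a short sketch, again assuming a connected ctrlr from a sketch like the one further up:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* Sketch: the IDENTIFY controller data parsed above is cached by the
 * driver; spdk_nvme_ctrlr_get_data() hands back a read-only pointer. */
static void
print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	printf("cntlid: 0x%04x\n", cdata->cntlid);
	printf("subnqn: %s\n", cdata->subnqn);
	/* MDTS is encoded as a power of two of the minimum page size;
	 * the "MDTS max_xfer_size 131072" above is the expanded value. */
	printf("mdts:   2^%u * CAP.MPSMIN page size\n", cdata->mdts);
}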
00:38:27.256 [2024-10-01 20:37:22.345525] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:38:27.256 [2024-10-01 20:37:22.345529] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:38:27.256 [2024-10-01 20:37:22.345533] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6000) on tqpair=0x1bcf8f0
00:38:27.256 [2024-10-01 20:37:22.345540] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:38:27.256 [2024-10-01 20:37:22.345545] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:38:27.256 [2024-10-01 20:37:22.345548] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bcf8f0)
00:38:27.256 [2024-10-01 20:37:22.345555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:38:27.256 [2024-10-01 20:37:22.345562] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:38:27.256 [2024-10-01 20:37:22.345566] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:38:27.256 [2024-10-01 20:37:22.345569] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1bcf8f0)
00:38:27.256 [2024-10-01 20:37:22.345575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:38:27.256 [2024-10-01 20:37:22.345582] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:38:27.256 [2024-10-01 20:37:22.345586] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:38:27.256 [2024-10-01 20:37:22.345589] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1bcf8f0)
00:38:27.256 [2024-10-01 20:37:22.345595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:38:27.256 [2024-10-01 20:37:22.345601] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:38:27.256 [2024-10-01 20:37:22.345605] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:38:27.256 [2024-10-01 20:37:22.345609] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0)
00:38:27.256 [2024-10-01 20:37:22.345614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:38:27.256 [2024-10-01 20:37:22.345619] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:38:27.256 [2024-10-01 20:37:22.345632] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:38:27.256 [2024-10-01 20:37:22.345640] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:38:27.256 [2024-10-01 20:37:22.345644] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bcf8f0)
00:38:27.256 [2024-10-01 20:37:22.345651] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:27.256 [2024-10-01 20:37:22.345673] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6000, cid 0, qid 0
00:38:27.256 [2024-10-01 20:37:22.345680] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6180, cid 1, qid 0
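The four ASYNC EVENT REQUEST capsules above (cid 0 through 3) are posted by the driver itself; an application only supplies the callback that runs when one of them completes. A sketch under that assumption, with a hypothetical callback name:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* Hypothetical AER completion handler; the driver re-arms the
 * ASYNC EVENT REQUEST commands on its own. */
static void
on_aer(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		return; /* AERs complete with an error when the ctrlr is torn down */
	}
	printf("async event: cdw0 = 0x%08x\n", cpl->cdw0);
}

static void
enable_aer(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, on_aer, NULL);
}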
00:38:27.256 [2024-10-01 20:37:22.345685] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6300, cid 2, qid 0
00:38:27.256 [2024-10-01 20:37:22.345690] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0
00:38:27.256 [2024-10-01 20:37:22.345694] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6600, cid 4, qid 0
00:38:27.256 [2024-10-01 20:37:22.345782] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:38:27.256 [2024-10-01 20:37:22.345789] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:38:27.256 [2024-10-01 20:37:22.345793] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:38:27.256 [2024-10-01 20:37:22.345797] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6600) on tqpair=0x1bcf8f0
00:38:27.256 [2024-10-01 20:37:22.345802] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:38:27.256 [2024-10-01 20:37:22.345808] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:38:27.256 [2024-10-01 20:37:22.345821] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:38:27.256 [2024-10-01 20:37:22.345828] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:38:27.256 [2024-10-01 20:37:22.345835] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:38:27.256 [2024-10-01 20:37:22.345839] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:38:27.256 [2024-10-01 20:37:22.345843] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bcf8f0)
00:38:27.256 [2024-10-01 20:37:22.345850] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:38:27.256 [2024-10-01 20:37:22.345871] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6600, cid 4, qid 0
00:38:27.256 [2024-10-01 20:37:22.345933] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:38:27.256 [2024-10-01 20:37:22.345941] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:38:27.256 [2024-10-01 20:37:22.345944] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:38:27.256 [2024-10-01 20:37:22.345949] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6600) on tqpair=0x1bcf8f0
00:38:27.256 [2024-10-01 20:37:22.346012] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:38:27.256 [2024-10-01 20:37:22.346023] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:38:27.256 [2024-10-01 20:37:22.346032] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:38:27.256 [2024-10-01 20:37:22.346036] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bcf8f0)
00:38:27.256 [2024-10-01 20:37:22.346044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:27.256 [2024-10-01 20:37:22.346064] nvme_tcp.c:
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6600, cid 4, qid 0 00:38:27.256 [2024-10-01 20:37:22.346161] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:27.256 [2024-10-01 20:37:22.346170] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:27.256 [2024-10-01 20:37:22.346174] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:27.256 [2024-10-01 20:37:22.346178] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bcf8f0): datao=0, datal=4096, cccid=4 00:38:27.256 [2024-10-01 20:37:22.346183] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bf6600) on tqpair(0x1bcf8f0): expected_datao=0, payload_size=4096 00:38:27.256 [2024-10-01 20:37:22.346188] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.256 [2024-10-01 20:37:22.346195] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:27.256 [2024-10-01 20:37:22.346199] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:27.256 [2024-10-01 20:37:22.346208] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.256 [2024-10-01 20:37:22.346214] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.256 [2024-10-01 20:37:22.346218] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.256 [2024-10-01 20:37:22.346222] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6600) on tqpair=0x1bcf8f0 00:38:27.256 [2024-10-01 20:37:22.346239] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:38:27.256 [2024-10-01 20:37:22.346250] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:38:27.256 [2024-10-01 20:37:22.346260] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:38:27.256 [2024-10-01 20:37:22.346269] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.256 [2024-10-01 20:37:22.346273] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bcf8f0) 00:38:27.256 [2024-10-01 20:37:22.346281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.256 [2024-10-01 20:37:22.346304] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6600, cid 4, qid 0 00:38:27.256 [2024-10-01 20:37:22.346408] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:27.257 [2024-10-01 20:37:22.346416] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:27.257 [2024-10-01 20:37:22.346420] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:27.257 [2024-10-01 20:37:22.346424] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bcf8f0): datao=0, datal=4096, cccid=4 00:38:27.257 [2024-10-01 20:37:22.346429] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bf6600) on tqpair(0x1bcf8f0): expected_datao=0, payload_size=4096 00:38:27.257 [2024-10-01 20:37:22.346433] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.257 [2024-10-01 20:37:22.346440] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:27.257 [2024-10-01 20:37:22.346445] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
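The CNS 02 IDENTIFY above returned the active namespace list, the driver instantiated the one entry it contained ("Namespace 1 was added"), and it now issues the per-namespace IDENTIFY (CNS 00, nsid 1). Walking the same list from application code is a three-call loop; a sketch assuming a connected ctrlr:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* Sketch: iterate the active namespaces discovered above. */
static void
list_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		printf("nsid %" PRIu32 ": %" PRIu64 " blocks of %" PRIu32 " bytes\n",
		       nsid, spdk_nvme_ns_get_num_sectors(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}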
00:38:27.257 [2024-10-01 20:37:22.346453] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.257 [2024-10-01 20:37:22.346459] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.257 [2024-10-01 20:37:22.346463] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.257 [2024-10-01 20:37:22.346483] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6600) on tqpair=0x1bcf8f0 00:38:27.257 [2024-10-01 20:37:22.346494] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:38:27.257 [2024-10-01 20:37:22.346504] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:38:27.257 [2024-10-01 20:37:22.346513] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.257 [2024-10-01 20:37:22.346517] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bcf8f0) 00:38:27.257 [2024-10-01 20:37:22.346524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.257 [2024-10-01 20:37:22.346545] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6600, cid 4, qid 0 00:38:27.257 [2024-10-01 20:37:22.346609] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:27.257 [2024-10-01 20:37:22.346616] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:27.257 [2024-10-01 20:37:22.346620] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:27.257 [2024-10-01 20:37:22.346624] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bcf8f0): datao=0, datal=4096, cccid=4 00:38:27.257 [2024-10-01 20:37:22.346629] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bf6600) on tqpair(0x1bcf8f0): expected_datao=0, payload_size=4096 00:38:27.257 [2024-10-01 20:37:22.346633] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.257 [2024-10-01 20:37:22.346640] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:27.257 [2024-10-01 20:37:22.346644] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:27.257 [2024-10-01 20:37:22.346652] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.257 [2024-10-01 20:37:22.346658] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.257 [2024-10-01 20:37:22.346662] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.257 [2024-10-01 20:37:22.346666] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6600) on tqpair=0x1bcf8f0 00:38:27.257 [2024-10-01 20:37:22.346679] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:38:27.257 [2024-10-01 20:37:22.346689] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:38:27.257 [2024-10-01 20:37:22.346697] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:38:27.257 [2024-10-01 20:37:22.346704] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host 
behavior support feature (timeout 30000 ms)
00:38:27.257 [2024-10-01 20:37:22.346709] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms)
00:38:27.257 [2024-10-01 20:37:22.346714] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms)
00:38:27.257 [2024-10-01 20:37:22.346719] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID
00:38:27.257 [2024-10-01 20:37:22.346723] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms)
00:38:27.257 [2024-10-01 20:37:22.346728] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout)
00:38:27.257 [2024-10-01 20:37:22.346742] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:38:27.257 [2024-10-01 20:37:22.346747] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bcf8f0)
00:38:27.257 [2024-10-01 20:37:22.346754] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:27.257 [2024-10-01 20:37:22.346761] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:38:27.257 [2024-10-01 20:37:22.346765] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:38:27.257 [2024-10-01 20:37:22.346769] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1bcf8f0)
00:38:27.257 [2024-10-01 20:37:22.346775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:38:27.257 [2024-10-01 20:37:22.346800] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6600, cid 4, qid 0
00:38:27.257 [2024-10-01 20:37:22.346808] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6780, cid 5, qid 0
00:38:27.257 [2024-10-01 20:37:22.346880] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:38:27.257 [2024-10-01 20:37:22.346887] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:38:27.257 [2024-10-01 20:37:22.346891] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:38:27.257 [2024-10-01 20:37:22.346895] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6600) on tqpair=0x1bcf8f0
00:38:27.257 [2024-10-01 20:37:22.346903] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:38:27.257 [2024-10-01 20:37:22.346909] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:38:27.257 [2024-10-01 20:37:22.346912] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:38:27.257 [2024-10-01 20:37:22.346916] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6780) on tqpair=0x1bcf8f0
00:38:27.257 [2024-10-01 20:37:22.346927] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:38:27.257 [2024-10-01 20:37:22.346932] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1bcf8f0)
00:38:27.257 [2024-10-01 20:37:22.346939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
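With the controller in the ready state the tool starts reading features, and the first KEEP ALIVE (opcode 18h) goes out alongside them; the "Sending keep alive every 5000000 us" entry earlier shows the cadence the driver derived from the keep alive timeout negotiated at connect (spdk_nvme_ctrlr_opts.keep_alive_timeout_ms). Keep alives are emitted from the admin poll path, so a fabrics host has to keep calling it; a sketch:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* Sketch: poll the admin queue. Completions for GET FEATURES, GET LOG
 * PAGE and AERs are delivered here, and the driver also uses this path
 * to send KEEP ALIVE when the interval elapses. */
static void
poll_admin_queue(struct spdk_nvme_ctrlr *ctrlr)
{
	int32_t rc = spdk_nvme_ctrlr_process_admin_completions(ctrlr);

	if (rc < 0) {
		/* negated errno: the admin qpair failed, e.g. the TCP
		 * connection dropped; the ctrlr needs a reset or reconnect. */
	}
}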
00:38:27.257 [2024-10-01 20:37:22.346958] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6780, cid 5, qid 0
00:38:27.257 [2024-10-01 20:37:22.347015] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:38:27.257 [2024-10-01 20:37:22.347022] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:38:27.257 [2024-10-01 20:37:22.347026] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:38:27.257 [2024-10-01 20:37:22.347030] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6780) on tqpair=0x1bcf8f0
00:38:27.257 [2024-10-01 20:37:22.347041] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:38:27.257 [2024-10-01 20:37:22.347046] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1bcf8f0)
00:38:27.257 [2024-10-01 20:37:22.347053] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:27.257 [2024-10-01 20:37:22.347072] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6780, cid 5, qid 0
00:38:27.257 [2024-10-01 20:37:22.351183] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:38:27.257 [2024-10-01 20:37:22.351213] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:38:27.257 [2024-10-01 20:37:22.351235] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:38:27.257 [2024-10-01 20:37:22.351240] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6780) on tqpair=0x1bcf8f0
00:38:27.257 [2024-10-01 20:37:22.351256] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:38:27.257 [2024-10-01 20:37:22.351261] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1bcf8f0)
00:38:27.257 [2024-10-01 20:37:22.351270] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:27.257 [2024-10-01 20:37:22.351298] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6780, cid 5, qid 0
00:38:27.257 [2024-10-01 20:37:22.351361] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:38:27.257 [2024-10-01 20:37:22.351369] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:38:27.257 [2024-10-01 20:37:22.351373] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:38:27.257 [2024-10-01 20:37:22.351377] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6780) on tqpair=0x1bcf8f0
00:38:27.257 [2024-10-01 20:37:22.351398] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:38:27.257 [2024-10-01 20:37:22.351404] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1bcf8f0)
00:38:27.257 [2024-10-01 20:37:22.351413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:27.257 [2024-10-01 20:37:22.351421] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:38:27.257 [2024-10-01 20:37:22.351425] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bcf8f0)
00:38:27.258 [2024-10-01 20:37:22.351448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
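-L all makes spdk_nvme_identify pull the standard log pages next; in cdw10 the low byte is the log page ID and bits 27:16 the zero-based dword count, so cdw10:007f0002 above is 128 dwords (512 bytes) of LID 02h, SMART / health information. The same request from application code, with a hypothetical completion callback; the reply arrives through the admin poll shown earlier:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static struct spdk_nvme_health_information_page g_health;

/* Hypothetical completion callback for the asynchronous request. */
static void
health_log_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		printf("composite temperature: %u K\n", g_health.temperature);
	}
}

/* Sketch: equivalent of the "GET LOG PAGE (02) ... cdw10:007f0002"
 * capsule above (LID 02h for all namespaces, 512 bytes, offset 0). */
static int
fetch_health_log(struct spdk_nvme_ctrlr *ctrlr)
{
	return spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
			SPDK_NVME_LOG_HEALTH_INFORMATION,
			SPDK_NVME_GLOBAL_NS_TAG,
			&g_health, sizeof(g_health), 0,
			health_log_done, NULL);
}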
20:37:22.351456] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.258 [2024-10-01 20:37:22.351475] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1bcf8f0) 00:38:27.258 [2024-10-01 20:37:22.351481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.258 [2024-10-01 20:37:22.351489] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.258 [2024-10-01 20:37:22.351493] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1bcf8f0) 00:38:27.258 [2024-10-01 20:37:22.351499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.258 [2024-10-01 20:37:22.351522] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6780, cid 5, qid 0 00:38:27.258 [2024-10-01 20:37:22.351530] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6600, cid 4, qid 0 00:38:27.258 [2024-10-01 20:37:22.351534] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6900, cid 6, qid 0 00:38:27.258 [2024-10-01 20:37:22.351539] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6a80, cid 7, qid 0 00:38:27.258 [2024-10-01 20:37:22.351691] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:27.258 [2024-10-01 20:37:22.351698] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:27.258 [2024-10-01 20:37:22.351702] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:27.258 [2024-10-01 20:37:22.351706] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bcf8f0): datao=0, datal=8192, cccid=5 00:38:27.258 [2024-10-01 20:37:22.351711] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bf6780) on tqpair(0x1bcf8f0): expected_datao=0, payload_size=8192 00:38:27.258 [2024-10-01 20:37:22.351715] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.258 [2024-10-01 20:37:22.351775] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:27.258 [2024-10-01 20:37:22.351782] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:27.258 [2024-10-01 20:37:22.351788] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:27.258 [2024-10-01 20:37:22.351795] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:27.258 [2024-10-01 20:37:22.351799] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:27.258 [2024-10-01 20:37:22.351803] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bcf8f0): datao=0, datal=512, cccid=4 00:38:27.258 [2024-10-01 20:37:22.351809] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bf6600) on tqpair(0x1bcf8f0): expected_datao=0, payload_size=512 00:38:27.258 [2024-10-01 20:37:22.351813] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.258 [2024-10-01 20:37:22.351820] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:27.258 [2024-10-01 20:37:22.351824] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:27.258 [2024-10-01 20:37:22.351830] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:27.258 [2024-10-01 20:37:22.351837] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =7 00:38:27.258 [2024-10-01 20:37:22.351841] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:27.258 [2024-10-01 20:37:22.351845] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bcf8f0): datao=0, datal=512, cccid=6 00:38:27.258 [2024-10-01 20:37:22.351849] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bf6900) on tqpair(0x1bcf8f0): expected_datao=0, payload_size=512 00:38:27.258 [2024-10-01 20:37:22.351854] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.258 [2024-10-01 20:37:22.351861] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:27.258 [2024-10-01 20:37:22.351865] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:27.258 [2024-10-01 20:37:22.351871] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:27.258 [2024-10-01 20:37:22.351877] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:27.258 [2024-10-01 20:37:22.351881] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:27.258 [2024-10-01 20:37:22.351885] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bcf8f0): datao=0, datal=4096, cccid=7 00:38:27.258 [2024-10-01 20:37:22.351889] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bf6a80) on tqpair(0x1bcf8f0): expected_datao=0, payload_size=4096 00:38:27.258 [2024-10-01 20:37:22.351894] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.258 [2024-10-01 20:37:22.351901] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:27.258 [2024-10-01 20:37:22.351905] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:27.258 [2024-10-01 20:37:22.351914] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.258 [2024-10-01 20:37:22.351921] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.258 [2024-10-01 20:37:22.351925] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.258 [2024-10-01 20:37:22.351929] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6780) on tqpair=0x1bcf8f0 00:38:27.258 [2024-10-01 20:37:22.351949] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.258 [2024-10-01 20:37:22.351957] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.258 [2024-10-01 20:37:22.351961] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.258 ===================================================== 00:38:27.258 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:38:27.258 ===================================================== 00:38:27.258 Controller Capabilities/Features 00:38:27.258 ================================ 00:38:27.258 Vendor ID: 8086 00:38:27.258 Subsystem Vendor ID: 8086 00:38:27.258 Serial Number: SPDK00000000000001 00:38:27.258 Model Number: SPDK bdev Controller 00:38:27.258 Firmware Version: 25.01 00:38:27.258 Recommended Arb Burst: 6 00:38:27.258 IEEE OUI Identifier: e4 d2 5c 00:38:27.258 Multi-path I/O 00:38:27.258 May have multiple subsystem ports: Yes 00:38:27.258 May have multiple controllers: Yes 00:38:27.258 Associated with SR-IOV VF: No 00:38:27.258 Max Data Transfer Size: 131072 00:38:27.258 Max Number of Namespaces: 32 00:38:27.258 Max Number of I/O Queues: 127 00:38:27.258 NVMe Specification Version (VS): 1.3 00:38:27.258 NVMe Specification Version (Identify): 1.3 00:38:27.258 Maximum Queue Entries: 
128 00:38:27.258 Contiguous Queues Required: Yes 00:38:27.258 Arbitration Mechanisms Supported 00:38:27.258 Weighted Round Robin: Not Supported 00:38:27.258 Vendor Specific: Not Supported 00:38:27.258 Reset Timeout: 15000 ms 00:38:27.258 Doorbell Stride: 4 bytes 00:38:27.258 NVM Subsystem Reset: Not Supported 00:38:27.258 Command Sets Supported 00:38:27.258 NVM Command Set: Supported 00:38:27.258 Boot Partition: Not Supported 00:38:27.258 Memory Page Size Minimum: 4096 bytes 00:38:27.258 Memory Page Size Maximum: 4096 bytes 00:38:27.258 Persistent Memory Region: Not Supported 00:38:27.258 Optional Asynchronous Events Supported 00:38:27.258 Namespace Attribute Notices: Supported 00:38:27.258 Firmware Activation Notices: Not Supported 00:38:27.258 ANA Change Notices: Not Supported 00:38:27.258 PLE Aggregate Log Change Notices: Not Supported 00:38:27.258 LBA Status Info Alert Notices: Not Supported 00:38:27.258 EGE Aggregate Log Change Notices: Not Supported 00:38:27.258 Normal NVM Subsystem Shutdown event: Not Supported 00:38:27.258 Zone Descriptor Change Notices: Not Supported 00:38:27.258 Discovery Log Change Notices: Not Supported 00:38:27.258 Controller Attributes 00:38:27.258 128-bit Host Identifier: Supported 00:38:27.258 Non-Operational Permissive Mode: Not Supported 00:38:27.258 NVM Sets: Not Supported 00:38:27.258 Read Recovery Levels: Not Supported 00:38:27.258 Endurance Groups: Not Supported 00:38:27.258 Predictable Latency Mode: Not Supported 00:38:27.258 Traffic Based Keep Alive: Not Supported 00:38:27.258 Namespace Granularity: Not Supported 00:38:27.258 SQ Associations: Not Supported 00:38:27.258 UUID List: Not Supported 00:38:27.258 Multi-Domain Subsystem: Not Supported 00:38:27.258 Fixed Capacity Management: Not Supported 00:38:27.258 Variable Capacity Management: Not Supported 00:38:27.258 Delete Endurance Group: Not Supported 00:38:27.258 Delete NVM Set: Not Supported 00:38:27.258 Extended LBA Formats Supported: Not Supported 00:38:27.258 Flexible Data Placement Supported: Not Supported 00:38:27.258 00:38:27.258 Controller Memory Buffer Support 00:38:27.258 ================================ 00:38:27.258 Supported: No 00:38:27.258 00:38:27.258 Persistent Memory Region Support 00:38:27.258 ================================ 00:38:27.258 Supported: No 00:38:27.258 00:38:27.258 Admin Command Set Attributes 00:38:27.258 ============================ 00:38:27.258 Security Send/Receive: Not Supported 00:38:27.258 Format NVM: Not Supported 00:38:27.258 Firmware Activate/Download: Not Supported 00:38:27.258 Namespace Management: Not Supported 00:38:27.258 Device Self-Test: Not Supported 00:38:27.258 Directives: Not Supported 00:38:27.258 NVMe-MI: Not Supported 00:38:27.259 Virtualization Management: Not Supported 00:38:27.259 Doorbell Buffer Config: Not Supported 00:38:27.259 Get LBA Status Capability: Not Supported 00:38:27.259 Command & Feature Lockdown Capability: Not Supported 00:38:27.259 Abort Command Limit: 4 00:38:27.259 Async Event Request Limit: 4 00:38:27.259 Number of Firmware Slots: N/A 00:38:27.259 Firmware Slot 1 Read-Only: N/A 00:38:27.259 Firmware Activation Without Reset: N/A 00:38:27.259 Multiple Update Detection Support: N/A 00:38:27.259 Firmware Update Granularity: No Information Provided 00:38:27.259 Per-Namespace SMART Log: No 00:38:27.259 Asymmetric Namespace Access Log Page: Not Supported 00:38:27.259 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:38:27.259 Command Effects Log Page: Supported 00:38:27.259 Get Log Page Extended Data: Supported 00:38:27.259
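The controller dump above (and continuing below) comes from SPDK's identify example application, which host/identify.sh points at the freshly created subsystem. As a rough sketch, the same dump can be regenerated against a live target; the binary path assumes the autotest repo layout used in this job, while the address, port, and NQN are the ones already shown in this log:

    /home/vagrant/spdk_repo/spdk/build/examples/identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    # the interleaved *DEBUG* records additionally require a debug build of
    # SPDK (./configure --enable-debug) with the nvme/nvme_tcp log components enabled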
Telemetry Log Pages: Not Supported 00:38:27.259 Persistent Event Log Pages: Not Supported 00:38:27.259 Supported Log Pages Log Page: May Support 00:38:27.259 Commands Supported & Effects Log Page: Not Supported 00:38:27.259 Feature Identifiers & Effects Log Page: May Support 00:38:27.259 NVMe-MI Commands & Effects Log Page: May Support 00:38:27.259 Data Area 4 for Telemetry Log: Not Supported 00:38:27.259 Error Log Page Entries Supported: 128 00:38:27.259 Keep Alive: Supported 00:38:27.259 Keep Alive Granularity: 10000 ms 00:38:27.259 00:38:27.259 NVM Command Set Attributes 00:38:27.259 ========================== 00:38:27.259 Submission Queue Entry Size 00:38:27.259 Max: 64 00:38:27.259 Min: 64 00:38:27.259 Completion Queue Entry Size 00:38:27.259 Max: 16 00:38:27.259 Min: 16 00:38:27.259 Number of Namespaces: 32 00:38:27.259 Compare Command: Supported 00:38:27.259 Write Uncorrectable Command: Not Supported 00:38:27.259 Dataset Management Command: Supported 00:38:27.259 Write Zeroes Command: Supported 00:38:27.259 Set Features Save Field: Not Supported 00:38:27.259 Reservations: Supported 00:38:27.259 Timestamp: Not Supported 00:38:27.259 Copy: Supported 00:38:27.259 Volatile Write Cache: Present 00:38:27.259 Atomic Write Unit (Normal): 1 00:38:27.259 Atomic Write Unit (PFail): 1 00:38:27.259 Atomic Compare & Write Unit: 1 00:38:27.259 Fused Compare & Write: Supported 00:38:27.259 Scatter-Gather List 00:38:27.259 SGL Command Set: Supported 00:38:27.259 SGL Keyed: Supported 00:38:27.259 SGL Bit Bucket Descriptor: Not Supported 00:38:27.259 SGL Metadata Pointer: Not Supported 00:38:27.259 Oversized SGL: Not Supported 00:38:27.259 SGL Metadata Address: Not Supported 00:38:27.259 SGL Offset: Supported 00:38:27.259 Transport SGL Data Block: Not Supported 00:38:27.259 Replay Protected Memory Block: Not Supported 00:38:27.259 00:38:27.259 Firmware Slot Information 00:38:27.259 ========================= 00:38:27.259 Active slot: 1 00:38:27.259 Slot 1 Firmware Revision: 25.01 00:38:27.259 00:38:27.259 00:38:27.259 Commands Supported and Effects 00:38:27.259 ============================== 00:38:27.259 Admin Commands 00:38:27.259 -------------- 00:38:27.259 Get Log Page (02h): Supported 00:38:27.259 Identify (06h): Supported 00:38:27.259 Abort (08h): Supported 00:38:27.259 Set Features (09h): Supported 00:38:27.259 Get Features (0Ah): Supported 00:38:27.259 Asynchronous Event Request (0Ch): Supported 00:38:27.259 Keep Alive (18h): Supported 00:38:27.259 I/O Commands 00:38:27.259 ------------ 00:38:27.259 Flush (00h): Supported LBA-Change 00:38:27.259 Write (01h): Supported LBA-Change 00:38:27.259 Read (02h): Supported 00:38:27.259 Compare (05h): Supported 00:38:27.259 Write Zeroes (08h): Supported LBA-Change 00:38:27.259 Dataset Management (09h): Supported LBA-Change 00:38:27.259 Copy (19h): Supported LBA-Change 00:38:27.259 00:38:27.259 Error Log 00:38:27.259 ========= 00:38:27.259 00:38:27.259 Arbitration 00:38:27.259 =========== 00:38:27.259 Arbitration Burst: 1 00:38:27.259 00:38:27.259 Power Management 00:38:27.259 ================ 00:38:27.259 Number of Power States: 1 00:38:27.259 Current Power State: Power State #0 00:38:27.259 Power State #0: 00:38:27.259 Max Power: 0.00 W 00:38:27.259 Non-Operational State: Operational 00:38:27.259 Entry Latency: Not Reported 00:38:27.259 Exit Latency: Not Reported 00:38:27.259 Relative Read Throughput: 0 00:38:27.259 Relative Read Latency: 0 00:38:27.259 Relative Write Throughput: 0 00:38:27.259 Relative Write Latency: 0 00:38:27.259 Idle Power: Not
Reported 00:38:27.259 Active Power: Not Reported 00:38:27.259 Non-Operational Permissive Mode: Not Supported 00:38:27.259 00:38:27.259 Health Information 00:38:27.259 ================== 00:38:27.259 Critical Warnings: 00:38:27.259 Available Spare Space: OK 00:38:27.259 Temperature: OK 00:38:27.259 Device Reliability: OK 00:38:27.259 Read Only: No 00:38:27.259 Volatile Memory Backup: OK 00:38:27.259 Current Temperature: 0 Kelvin (-273 Celsius) 00:38:27.259 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:38:27.259 Available Spare: 0% 00:38:27.259 Available Spare Threshold: 0% 00:38:27.259 [2024-10-01 20:37:22.351965] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6600) on tqpair=0x1bcf8f0 00:38:27.259 [2024-10-01 20:37:22.351978] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.259 [2024-10-01 20:37:22.351985] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.259 [2024-10-01 20:37:22.351989] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.259 [2024-10-01 20:37:22.351994] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6900) on tqpair=0x1bcf8f0 00:38:27.259 [2024-10-01 20:37:22.352002] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.259 [2024-10-01 20:37:22.352008] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.259 [2024-10-01 20:37:22.352012] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.259 [2024-10-01 20:37:22.352016] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6a80) on tqpair=0x1bcf8f0 00:38:27.259 [2024-10-01 20:37:22.352168] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.259 [2024-10-01 20:37:22.352175] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1bcf8f0) 00:38:27.259 [2024-10-01 20:37:22.352183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.259 [2024-10-01 20:37:22.352266] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6a80, cid 7, qid 0 00:38:27.259 [2024-10-01 20:37:22.352352] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.259 [2024-10-01 20:37:22.352360] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.259 [2024-10-01 20:37:22.352364] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.259 [2024-10-01 20:37:22.352369] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6a80) on tqpair=0x1bcf8f0 00:38:27.259 [2024-10-01 20:37:22.352428] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:38:27.259 [2024-10-01 20:37:22.352446] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6000) on tqpair=0x1bcf8f0 00:38:27.259 [2024-10-01 20:37:22.352454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:27.259 [2024-10-01 20:37:22.352460] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6180) on tqpair=0x1bcf8f0 00:38:27.259 [2024-10-01 20:37:22.352465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:27.259 [2024-10-01 20:37:22.352471] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*:
complete tcp_req(0x1bf6300) on tqpair=0x1bcf8f0 00:38:27.259 [2024-10-01 20:37:22.352476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:27.259 [2024-10-01 20:37:22.352481] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.259 [2024-10-01 20:37:22.352501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:27.260 [2024-10-01 20:37:22.352511] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.352516] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.352520] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.260 [2024-10-01 20:37:22.352529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.260 [2024-10-01 20:37:22.352557] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.260 [2024-10-01 20:37:22.352651] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.260 [2024-10-01 20:37:22.352659] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.260 [2024-10-01 20:37:22.352663] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.352667] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.260 [2024-10-01 20:37:22.352675] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.352679] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.352683] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.260 [2024-10-01 20:37:22.352690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.260 [2024-10-01 20:37:22.352713] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.260 [2024-10-01 20:37:22.352792] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.260 [2024-10-01 20:37:22.352799] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.260 [2024-10-01 20:37:22.352803] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.352807] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.260 [2024-10-01 20:37:22.352812] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:38:27.260 [2024-10-01 20:37:22.352817] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:38:27.260 [2024-10-01 20:37:22.352828] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.352833] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.352836] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.260 [2024-10-01 20:37:22.352844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.260 [2024-10-01 20:37:22.352863] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.260 [2024-10-01 20:37:22.352920] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.260 [2024-10-01 20:37:22.352937] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.260 [2024-10-01 20:37:22.352943] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.352947] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.260 [2024-10-01 20:37:22.352959] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.352964] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.352967] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.260 [2024-10-01 20:37:22.352975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.260 [2024-10-01 20:37:22.352995] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.260 [2024-10-01 20:37:22.353047] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.260 [2024-10-01 20:37:22.353058] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.260 [2024-10-01 20:37:22.353063] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.353067] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.260 [2024-10-01 20:37:22.353078] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.353083] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.353087] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.260 [2024-10-01 20:37:22.353094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.260 [2024-10-01 20:37:22.353144] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.260 [2024-10-01 20:37:22.353231] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.260 [2024-10-01 20:37:22.353239] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.260 [2024-10-01 20:37:22.353244] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.353248] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.260 [2024-10-01 20:37:22.353260] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.353266] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.353270] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.260 [2024-10-01 20:37:22.353277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.260 [2024-10-01 20:37:22.353299] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.260 [2024-10-01 20:37:22.353355] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.260 [2024-10-01 20:37:22.353362] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.260 [2024-10-01 20:37:22.353366] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.353370] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.260 [2024-10-01 20:37:22.353381] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.353387] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.353391] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.260 [2024-10-01 20:37:22.353398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.260 [2024-10-01 20:37:22.353418] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.260 [2024-10-01 20:37:22.353488] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.260 [2024-10-01 20:37:22.353510] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.260 [2024-10-01 20:37:22.353513] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.353518] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.260 [2024-10-01 20:37:22.353528] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.353533] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.353537] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.260 [2024-10-01 20:37:22.353544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.260 [2024-10-01 20:37:22.353563] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.260 [2024-10-01 20:37:22.353615] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.260 [2024-10-01 20:37:22.353622] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.260 [2024-10-01 20:37:22.353625] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.353629] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.260 [2024-10-01 20:37:22.353640] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.353645] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.260 [2024-10-01 20:37:22.353648] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.260 [2024-10-01 20:37:22.353656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.260 [2024-10-01 20:37:22.353674] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.260 [2024-10-01 20:37:22.353725] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.260 [2024-10-01 20:37:22.353731] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.261 [2024-10-01 
20:37:22.353736] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.353740] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.261 [2024-10-01 20:37:22.353750] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.353755] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.353759] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.261 [2024-10-01 20:37:22.353766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.261 [2024-10-01 20:37:22.353785] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.261 [2024-10-01 20:37:22.353839] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.261 [2024-10-01 20:37:22.353845] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.261 [2024-10-01 20:37:22.353849] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.353853] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.261 [2024-10-01 20:37:22.353864] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.353868] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.353872] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.261 [2024-10-01 20:37:22.353880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.261 [2024-10-01 20:37:22.353898] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.261 [2024-10-01 20:37:22.353950] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.261 [2024-10-01 20:37:22.353957] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.261 [2024-10-01 20:37:22.353960] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.353964] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.261 [2024-10-01 20:37:22.353975] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.353980] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.353983] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.261 [2024-10-01 20:37:22.353990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.261 [2024-10-01 20:37:22.354009] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.261 [2024-10-01 20:37:22.354062] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.261 [2024-10-01 20:37:22.354068] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.261 [2024-10-01 20:37:22.354072] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.354076] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on 
tqpair=0x1bcf8f0 00:38:27.261 [2024-10-01 20:37:22.354087] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.354092] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.354095] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.261 [2024-10-01 20:37:22.354102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.261 [2024-10-01 20:37:22.354138] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.261 [2024-10-01 20:37:22.354219] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.261 [2024-10-01 20:37:22.354230] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.261 [2024-10-01 20:37:22.354234] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.354238] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.261 [2024-10-01 20:37:22.354250] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.354255] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.354259] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.261 [2024-10-01 20:37:22.354267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.261 [2024-10-01 20:37:22.354290] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.261 [2024-10-01 20:37:22.354345] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.261 [2024-10-01 20:37:22.354353] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.261 [2024-10-01 20:37:22.354357] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.354361] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.261 [2024-10-01 20:37:22.354372] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.354378] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.354382] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.261 [2024-10-01 20:37:22.354389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.261 [2024-10-01 20:37:22.354409] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.261 [2024-10-01 20:37:22.354477] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.261 [2024-10-01 20:37:22.354485] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.261 [2024-10-01 20:37:22.354489] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.354493] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.261 [2024-10-01 20:37:22.354504] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.354509] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.354513] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.261 [2024-10-01 20:37:22.354520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.261 [2024-10-01 20:37:22.354541] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.261 [2024-10-01 20:37:22.354605] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.261 [2024-10-01 20:37:22.354612] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.261 [2024-10-01 20:37:22.354616] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.354620] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.261 [2024-10-01 20:37:22.354630] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.354635] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.261 [2024-10-01 20:37:22.354639] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.261 [2024-10-01 20:37:22.354646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.261 [2024-10-01 20:37:22.354664] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.261 [2024-10-01 20:37:22.354716] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.262 [2024-10-01 20:37:22.354723] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.262 [2024-10-01 20:37:22.354726] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.354730] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.262 [2024-10-01 20:37:22.354741] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.354746] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.354749] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.262 [2024-10-01 20:37:22.354757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.262 [2024-10-01 20:37:22.354776] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.262 [2024-10-01 20:37:22.354826] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.262 [2024-10-01 20:37:22.354833] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.262 [2024-10-01 20:37:22.354836] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.354840] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.262 [2024-10-01 20:37:22.354851] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.354856] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.354860] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.262 
[2024-10-01 20:37:22.354867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.262 [2024-10-01 20:37:22.354886] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.262 [2024-10-01 20:37:22.354940] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.262 [2024-10-01 20:37:22.354947] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.262 [2024-10-01 20:37:22.354951] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.354955] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.262 [2024-10-01 20:37:22.354966] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.354971] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.354974] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.262 [2024-10-01 20:37:22.354981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.262 [2024-10-01 20:37:22.355005] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.262 [2024-10-01 20:37:22.355056] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.262 [2024-10-01 20:37:22.355063] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.262 [2024-10-01 20:37:22.355067] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.355071] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.262 [2024-10-01 20:37:22.355081] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.355086] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.355090] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.262 [2024-10-01 20:37:22.355097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.262 [2024-10-01 20:37:22.355116] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.262 [2024-10-01 20:37:22.355196] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.262 [2024-10-01 20:37:22.355206] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.262 [2024-10-01 20:37:22.355209] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.355214] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.262 [2024-10-01 20:37:22.355225] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.355230] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.355234] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.262 [2024-10-01 20:37:22.355242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.262 [2024-10-01 20:37:22.355263] 
nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.262 [2024-10-01 20:37:22.355320] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.262 [2024-10-01 20:37:22.355327] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.262 [2024-10-01 20:37:22.355331] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.355335] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.262 [2024-10-01 20:37:22.355346] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.355351] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.355355] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.262 [2024-10-01 20:37:22.355362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.262 [2024-10-01 20:37:22.355382] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.262 [2024-10-01 20:37:22.355434] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.262 [2024-10-01 20:37:22.355461] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.262 [2024-10-01 20:37:22.355465] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.355470] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.262 [2024-10-01 20:37:22.355481] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.355486] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.355490] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.262 [2024-10-01 20:37:22.355498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.262 [2024-10-01 20:37:22.355518] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.262 [2024-10-01 20:37:22.355568] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.262 [2024-10-01 20:37:22.355575] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.262 [2024-10-01 20:37:22.355579] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.355583] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.262 [2024-10-01 20:37:22.355593] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.355598] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.262 [2024-10-01 20:37:22.355602] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.262 [2024-10-01 20:37:22.355609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.262 [2024-10-01 20:37:22.355628] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.263 [2024-10-01 20:37:22.355683] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.263 
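The long run of near-identical FABRIC PROPERTY GET records through here is the identify application disconnecting: after requesting shutdown via a Fabrics Property Set (the FABRIC PROPERTY SET record earlier), the host keeps issuing a Property Get for the controller status register on the admin queue until shutdown is reported complete, staying inside the 10000 ms shutdown timeout and RTD3E = 0 us noted above. A quick way to count the polls from a saved copy of this console output (the file name build.log is just a placeholder):

    # each poll prints one FABRIC PROPERTY GET line on the admin queue (cid 3)
    grep -c 'FABRIC PROPERTY GET qid:0 cid:3' build.log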
[2024-10-01 20:37:22.355694] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.263 [2024-10-01 20:37:22.355698] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.263 [2024-10-01 20:37:22.355702] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.263 [2024-10-01 20:37:22.355713] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.263 [2024-10-01 20:37:22.355719] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.263 [2024-10-01 20:37:22.355731] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.263 [2024-10-01 20:37:22.355756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.263 [2024-10-01 20:37:22.355795] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.263 [2024-10-01 20:37:22.355853] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.263 [2024-10-01 20:37:22.355866] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.263 [2024-10-01 20:37:22.355871] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.263 [2024-10-01 20:37:22.355876] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.263 [2024-10-01 20:37:22.355888] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.263 [2024-10-01 20:37:22.355894] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.263 [2024-10-01 20:37:22.355898] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.263 [2024-10-01 20:37:22.355906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.263 [2024-10-01 20:37:22.355927] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.263 [2024-10-01 20:37:22.355984] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.263 [2024-10-01 20:37:22.355992] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.263 [2024-10-01 20:37:22.355996] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.263 [2024-10-01 20:37:22.356001] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.263 [2024-10-01 20:37:22.356012] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.263 [2024-10-01 20:37:22.356018] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.263 [2024-10-01 20:37:22.356022] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.263 [2024-10-01 20:37:22.356030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.263 [2024-10-01 20:37:22.356077] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.263 [2024-10-01 20:37:22.356162] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.263 [2024-10-01 20:37:22.360201] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.263 [2024-10-01 20:37:22.360209] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:38:27.263 [2024-10-01 20:37:22.360214] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 [2024-10-01 20:37:22.360228] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:27.263 [2024-10-01 20:37:22.360234] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:27.263 [2024-10-01 20:37:22.360238] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bcf8f0) 00:38:27.263 [2024-10-01 20:37:22.360247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.263 [2024-10-01 20:37:22.360274] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bf6480, cid 3, qid 0 00:38:27.263 [2024-10-01 20:37:22.360333] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:27.263 [2024-10-01 20:37:22.360340] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:27.263 [2024-10-01 20:37:22.360344] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:27.263 [2024-10-01 20:37:22.360348] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bf6480) on tqpair=0x1bcf8f0 00:38:27.263 [2024-10-01 20:37:22.360356] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:38:27.263 Life Percentage Used: 0% 00:38:27.263 Data Units Read: 0 00:38:27.263 Data Units Written: 0 00:38:27.263 Host Read Commands: 0 00:38:27.263 Host Write Commands: 0 00:38:27.263 Controller Busy Time: 0 minutes 00:38:27.263 Power Cycles: 0 00:38:27.263 Power On Hours: 0 hours 00:38:27.263 Unsafe Shutdowns: 0 00:38:27.263 Unrecoverable Media Errors: 0 00:38:27.263 Lifetime Error Log Entries: 0 00:38:27.263 Warning Temperature Time: 0 minutes 00:38:27.263 Critical Temperature Time: 0 minutes 00:38:27.263 00:38:27.263 Number of Queues 00:38:27.263 ================ 00:38:27.263 Number of I/O Submission Queues: 127 00:38:27.263 Number of I/O Completion Queues: 127 00:38:27.263 00:38:27.263 Active Namespaces 00:38:27.263 ================= 00:38:27.263 Namespace ID:1 00:38:27.263 Error Recovery Timeout: Unlimited 00:38:27.263 Command Set Identifier: NVM (00h) 00:38:27.263 Deallocate: Supported 00:38:27.263 Deallocated/Unwritten Error: Not Supported 00:38:27.263 Deallocated Read Value: Unknown 00:38:27.263 Deallocate in Write Zeroes: Not Supported 00:38:27.263 Deallocated Guard Field: 0xFFFF 00:38:27.263 Flush: Supported 00:38:27.263 Reservation: Supported 00:38:27.263 Namespace Sharing Capabilities: Multiple Controllers 00:38:27.263 Size (in LBAs): 131072 (0GiB) 00:38:27.263 Capacity (in LBAs): 131072 (0GiB) 00:38:27.263 Utilization (in LBAs): 131072 (0GiB) 00:38:27.263 NGUID: ABCDEF0123456789ABCDEF0123456789 00:38:27.263 EUI64: ABCDEF0123456789 00:38:27.263 UUID: 9d1f7d80-7b93-480a-bd2b-645c232041a1 00:38:27.263 Thin Provisioning: Not Supported 00:38:27.263 Per-NS Atomic Units: Yes 00:38:27.263 Atomic Boundary Size (Normal): 0 00:38:27.263 Atomic Boundary Size (PFail): 0 00:38:27.263 Atomic Boundary Offset: 0 00:38:27.263 Maximum Single Source Range Length: 65535 00:38:27.263 Maximum Copy Length: 65535 00:38:27.263 Maximum Source Range Count: 1 00:38:27.263 NGUID/EUI64 Never Reused: No 00:38:27.263 Namespace Write Protected: No 00:38:27.263 Number of LBA Formats: 1 00:38:27.263 Current LBA Format: LBA Format #00 00:38:27.263 LBA Format #00: Data Size: 512 Metadata Size: 0 00:38:27.263 00:38:27.263 20:37:22
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:38:27.264 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:27.264 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.264 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:27.264 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.264 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:38:27.264 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:38:27.264 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:27.264 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:38:27.264 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:27.264 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:38:27.264 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:27.264 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:27.264 rmmod nvme_tcp 00:38:27.264 rmmod nvme_fabrics 00:38:27.264 rmmod nvme_keyring 00:38:27.264 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:27.264 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:38:27.264 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:38:27.264 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 86584 ']' 00:38:27.264 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 86584 00:38:27.264 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 86584 ']' 00:38:27.264 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 86584 00:38:27.264 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:38:27.523 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:27.523 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86584 00:38:27.523 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:27.523 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:27.523 killing process with pid 86584 00:38:27.523 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86584' 00:38:27.523 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 86584 00:38:27.523 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 86584 00:38:27.523 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:27.523 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:27.523 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:27.523 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:38:27.523 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:38:27.523 
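nvmftestfini then unwinds the host-side setup: the kernel NVMe-oF initiator modules come back out (the rmmod output above) and the firewall rules the test installed are stripped by round-tripping the ruleset through grep, as the pipeline traced above and below shows. Condensed into a standalone sketch, assuming the test's rules are the only ones carrying the SPDK_NVMF tag:

    modprobe -v -r nvme-tcp     # pulls out nvme_fabrics/nvme_keyring too, per the rmmod lines above
    iptables-save | grep -v SPDK_NVMF | iptables-restore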
20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:27.523 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:38:27.523 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:27.523 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:38:27.523 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:38:27.523 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:38:27.782 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:38:27.782 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:38:27.782 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:38:27.782 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:38:27.782 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:38:27.782 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:38:27.782 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:38:27.782 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:38:27.782 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:38:27.782 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:27.782 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:27.782 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:38:27.782 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:27.782 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:27.782 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:27.783 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:38:27.783 00:38:27.783 real 0m2.279s 00:38:27.783 user 0m4.750s 00:38:27.783 sys 0m0.740s 00:38:27.783 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:27.783 20:37:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:27.783 ************************************ 00:38:27.783 END TEST nvmf_identify 00:38:27.783 ************************************ 00:38:27.783 20:37:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:38:27.783 20:37:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:27.783 20:37:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:27.783 20:37:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.783 ************************************ 00:38:27.783 START TEST nvmf_perf 00:38:27.783 ************************************ 00:38:27.783 20:37:23 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:38:28.043 * Looking for test storage... 00:38:28.043 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:28.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.043 --rc genhtml_branch_coverage=1 00:38:28.043 --rc genhtml_function_coverage=1 00:38:28.043 --rc genhtml_legend=1 00:38:28.043 --rc geninfo_all_blocks=1 00:38:28.043 --rc geninfo_unexecuted_blocks=1 00:38:28.043 00:38:28.043 ' 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:28.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.043 --rc genhtml_branch_coverage=1 00:38:28.043 --rc genhtml_function_coverage=1 00:38:28.043 --rc genhtml_legend=1 00:38:28.043 --rc geninfo_all_blocks=1 00:38:28.043 --rc geninfo_unexecuted_blocks=1 00:38:28.043 00:38:28.043 ' 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:28.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.043 --rc genhtml_branch_coverage=1 00:38:28.043 --rc genhtml_function_coverage=1 00:38:28.043 --rc genhtml_legend=1 00:38:28.043 --rc geninfo_all_blocks=1 00:38:28.043 --rc geninfo_unexecuted_blocks=1 00:38:28.043 00:38:28.043 ' 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:28.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.043 --rc genhtml_branch_coverage=1 00:38:28.043 --rc genhtml_function_coverage=1 00:38:28.043 --rc genhtml_legend=1 00:38:28.043 --rc geninfo_all_blocks=1 00:38:28.043 --rc geninfo_unexecuted_blocks=1 00:38:28.043 00:38:28.043 ' 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
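The lt/cmp_versions trace above is a field-wise numeric version compare (here deciding that lcov 1.15 predates 2, which selects the --rc lcov_* option spelling exported just above). A simplified sketch, reduced to the strict '<' case this run exercises; the real helper in scripts/common.sh also validates each component via decimal and handles other operators:

    # Split both versions on '.', '-' and ':' and compare component by
    # component; missing components evaluate as 0 in bash arithmetic.
    lt() {
        local IFS=.-: v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((ver1[v] < ver2[v])) && return 0   # strictly older: '<' holds
            ((ver1[v] > ver2[v])) && return 1   # strictly newer
        done
        return 1                                # equal: strict '<' fails
    }

lt 1.15 2 compares 1 against 2 in the first field and succeeds immediately, exactly as the trace does.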
NVMF_SECOND_PORT=4421 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:28.043 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:28.044 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
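The "[: : integer expression expected" message above is benign: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', a numeric test against a variable that is unset in this environment, and execution continues with the next branch. A minimal sketch of the conventional hardening (FLAG is a placeholder name, not the variable common.sh actually tests):

    # Default the expansion so test(1) always sees an integer; an unset
    # FLAG then compares as 0 instead of raising the error above.
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi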
# eval '_remove_spdk_ns 15> /dev/null' 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # nvmf_veth_init 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:38:28.044 Cannot find device "nvmf_init_br" 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:38:28.044 Cannot find device "nvmf_init_br2" 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:38:28.044 Cannot find device "nvmf_tgt_br" 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:38:28.044 Cannot find device "nvmf_tgt_br2" 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- 
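The run of Cannot find device failures starting above (and continuing just below) is expected: nvmf_veth_init first tears down whatever topology a previous run may have left behind, and the trailing "# true" steps in the trace show each cleanup command guarded so a missing device cannot abort the script. The idiom, per the trace:

    # Idempotent pre-clean: tolerate absent devices and keep going.
    ip link set nvmf_init_br nomaster || true   # prints "Cannot find device" and moves on
    ip link delete nvmf_br type bridge || true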
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:38:28.044 Cannot find device "nvmf_init_br" 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:38:28.044 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:38:28.304 Cannot find device "nvmf_init_br2" 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:38:28.304 Cannot find device "nvmf_tgt_br" 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:38:28.304 Cannot find device "nvmf_tgt_br2" 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:38:28.304 Cannot find device "nvmf_br" 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:38:28.304 Cannot find device "nvmf_init_if" 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:38:28.304 Cannot find device "nvmf_init_if2" 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:28.304 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:28.304 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:38:28.304 20:37:23 
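Taken together, the commands above and just below assemble the test network: four veth pairs, a namespace holding the target-side ends, and one bridge joining all host-side peers. A condensed sketch of the finished topology, addresses per this run:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk    # target ends live inside the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if           # initiator addresses
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br              # bridge joins all host-side peers
    done
    # (the trace also brings nvmf_init_if*, the namespaced nvmf_tgt_if* and lo up)

The four ping checks that follow confirm both directions: host to 10.0.0.3/10.0.0.4, and from inside the namespace back to 10.0.0.1/10.0.0.2.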
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:38:28.304 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:38:28.563 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:28.563 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:38:28.563 00:38:28.563 --- 10.0.0.3 ping statistics --- 00:38:28.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:28.563 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:38:28.563 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:38:28.563 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:38:28.563 00:38:28.563 --- 10.0.0.4 ping statistics --- 00:38:28.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:28.563 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:28.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:28.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:38:28.563 00:38:28.563 --- 10.0.0.1 ping statistics --- 00:38:28.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:28.563 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:38:28.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:28.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:38:28.563 00:38:28.563 --- 10.0.0.2 ping statistics --- 00:38:28.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:28.563 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # return 0 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=86845 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 86845 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 86845 ']' 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:28.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
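nvmfappstart above launches nvmf_tgt inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and then blocks in waitforlisten until pid 86845 answers on /var/tmp/spdk.sock. A hedged sketch of that wait loop; the real helper lives in autotest_common.sh and is more elaborate:

    # Poll the RPC socket until the target responds or retries run out.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target process died
            "$rpc" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1    # never started listening
    }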
00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:28.563 20:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:38:28.563 [2024-10-01 20:37:23.695507] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:38:28.563 [2024-10-01 20:37:23.695585] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:28.823 [2024-10-01 20:37:23.828394] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:28.823 [2024-10-01 20:37:23.881782] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:28.823 [2024-10-01 20:37:23.881832] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:28.823 [2024-10-01 20:37:23.881857] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:28.823 [2024-10-01 20:37:23.881865] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:28.823 [2024-10-01 20:37:23.881871] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:28.823 [2024-10-01 20:37:23.882021] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:28.823 [2024-10-01 20:37:23.882201] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:38:28.823 [2024-10-01 20:37:23.883090] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:38:28.823 [2024-10-01 20:37:23.883157] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:28.823 20:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:28.823 20:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:38:28.823 20:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:28.823 20:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:28.823 20:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:38:28.823 20:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:28.823 20:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:38:28.823 20:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:38:29.390 20:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:38:29.390 20:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:38:29.649 20:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:38:29.649 20:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:29.908 20:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:38:29.908 20:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:38:29.908 20:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 
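The perf.sh@30 pair above extracts the PCI address of the auto-attached Nvme0 controller from the saved bdev configuration. A sketch of the shape the filter consumes; the JSON here is abbreviated and partly hypothetical, showing only the fields the filter reads:

    # framework_get_config bdev returns an array of {method, params}
    # records; the filter keeps the params block named Nvme0 and prints
    # its transport address.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev \
        | jq -r '.[].params | select(.name=="Nvme0").traddr'
    # e.g. input entry: {"method": "bdev_nvme_attach_controller",
    #                    "params": {"name": "Nvme0", "trtype": "PCIe",
    #                               "traddr": "0000:00:10.0"}}
    # -> 0000:00:10.0, stored as local_nvme_trid below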
00:38:29.908 20:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:38:29.908 20:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:38:30.167 [2024-10-01 20:37:25.268582] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:30.167 20:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:30.427 20:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:38:30.427 20:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:30.686 20:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:38:30.686 20:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:38:30.945 20:37:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:38:31.208 [2024-10-01 20:37:26.265850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:31.208 20:37:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:38:31.511 20:37:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:38:31.511 20:37:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:38:31.511 20:37:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:38:31.512 20:37:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:38:32.473 Initializing NVMe Controllers 00:38:32.473 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:38:32.473 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:38:32.473 Initialization complete. Launching workers. 00:38:32.473 ======================================================== 00:38:32.473 Latency(us) 00:38:32.473 Device Information : IOPS MiB/s Average min max 00:38:32.473 PCIE (0000:00:10.0) NSID 1 from core 0: 22114.34 86.38 1446.31 392.18 8060.87 00:38:32.473 ======================================================== 00:38:32.473 Total : 22114.34 86.38 1446.31 392.18 8060.87 00:38:32.473 00:38:32.473 20:37:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:38:33.850 Initializing NVMe Controllers 00:38:33.850 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:38:33.850 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:33.850 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:38:33.850 Initialization complete. Launching workers. 
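Before the perf runs above and below, the target was provisioned in the RPC steps traced at the top of this stretch: one TCP transport, one subsystem with two namespaces (RAM disk plus the local NVMe), a data listener and a discovery listener on the namespaced address. Condensed:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o                # "TCP Transport Init"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # NSID 1, 512 B sectors
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # NSID 2, 4096 B sectors
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420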
00:38:33.850 ======================================================== 00:38:33.850 Latency(us) 00:38:33.850 Device Information : IOPS MiB/s Average min max 00:38:33.850 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3953.44 15.44 252.63 99.60 7219.06 00:38:33.850 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 121.52 0.47 8228.86 7014.71 15001.57 00:38:33.850 ======================================================== 00:38:33.850 Total : 4074.96 15.92 490.49 99.60 15001.57 00:38:33.850 00:38:33.850 20:37:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:38:35.224 Initializing NVMe Controllers 00:38:35.224 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:38:35.224 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:35.224 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:38:35.224 Initialization complete. Launching workers. 00:38:35.224 ======================================================== 00:38:35.224 Latency(us) 00:38:35.224 Device Information : IOPS MiB/s Average min max 00:38:35.224 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9784.60 38.22 3274.00 535.15 7340.56 00:38:35.224 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2734.62 10.68 11813.02 6018.27 18933.99 00:38:35.224 ======================================================== 00:38:35.224 Total : 12519.22 48.90 5139.21 535.15 18933.99 00:38:35.224 00:38:35.224 20:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:38:35.224 20:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:38:37.753 Initializing NVMe Controllers 00:38:37.753 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:38:37.753 Controller IO queue size 128, less than required. 00:38:37.753 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:37.753 Controller IO queue size 128, less than required. 00:38:37.753 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:37.753 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:37.753 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:38:37.753 Initialization complete. Launching workers. 
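In these tables the Total row's IOPS is a plain sum while the Average column is IOPS-weighted, which is why the -q 1 run above lands near 490 us even though the RAM-disk namespace averages 253 us. Checking its numbers:

    # IOPS-weighted mean latency for the -q 1 table above: the 512 B
    # RAM-disk namespace dominates the weight, the 4 KiB NVMe namespace
    # drags the mean up.
    awk 'BEGIN {
        iops1 = 3953.44; lat1 = 252.63    # NSID 1 (Malloc0)
        iops2 = 121.52;  lat2 = 8228.86   # NSID 2 (Nvme0n1)
        total = iops1 + iops2
        printf "%.2f IOPS, %.2f us average\n", total, (iops1*lat1 + iops2*lat2)/total
    }'
    # -> 4074.96 IOPS, 490.49 us average, matching the Total row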
00:38:37.753 ======================================================== 00:38:37.753 Latency(us) 00:38:37.753 Device Information : IOPS MiB/s Average min max 00:38:37.753 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1617.19 404.30 80366.87 49242.88 134251.21 00:38:37.753 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 609.44 152.36 222343.01 59506.08 353458.53 00:38:37.753 ======================================================== 00:38:37.753 Total : 2226.63 556.66 119226.51 49242.88 353458.53 00:38:37.753 00:38:38.011 20:37:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:38:38.011 Initializing NVMe Controllers 00:38:38.011 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:38:38.011 Controller IO queue size 128, less than required. 00:38:38.011 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:38.011 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:38:38.011 Controller IO queue size 128, less than required. 00:38:38.011 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:38.011 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:38:38.011 WARNING: Some requested NVMe devices were skipped 00:38:38.011 No valid NVMe controllers or AIO or URING devices found 00:38:38.011 20:37:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:38:40.544 Initializing NVMe Controllers 00:38:40.544 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:38:40.544 Controller IO queue size 128, less than required. 00:38:40.544 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:40.544 Controller IO queue size 128, less than required. 00:38:40.544 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:40.544 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:40.544 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:38:40.544 Initialization complete. Launching workers. 
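The -o 36964 run above ends with no devices under test because the perf tool requires the IO size to be a whole number of sectors per namespace, and 36964 is aligned to neither the 512 B Malloc0 sectors nor the 4096 B Nvme0n1 sectors:

    # Both remainders are non-zero, so both namespaces are dropped and
    # the run aborts with "No valid NVMe controllers ... found".
    for bs in 512 4096; do
        echo "36964 % $bs = $((36964 % bs))"   # -> 100 in both cases
    done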
00:38:40.544 00:38:40.544 ==================== 00:38:40.544 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:38:40.544 TCP transport: 00:38:40.544 polls: 14490 00:38:40.544 idle_polls: 10305 00:38:40.544 sock_completions: 4185 00:38:40.544 nvme_completions: 4909 00:38:40.544 submitted_requests: 7324 00:38:40.544 queued_requests: 1 00:38:40.544 00:38:40.544 ==================== 00:38:40.544 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:38:40.544 TCP transport: 00:38:40.544 polls: 14459 00:38:40.545 idle_polls: 10793 00:38:40.545 sock_completions: 3666 00:38:40.545 nvme_completions: 6845 00:38:40.545 submitted_requests: 10402 00:38:40.545 queued_requests: 1 00:38:40.545 ======================================================== 00:38:40.545 Latency(us) 00:38:40.545 Device Information : IOPS MiB/s Average min max 00:38:40.545 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1225.03 306.26 107172.79 58612.98 351334.47 00:38:40.545 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1708.26 427.06 75726.50 43110.86 147339.98 00:38:40.545 ======================================================== 00:38:40.545 Total : 2933.29 733.32 88859.45 43110.86 351334.47 00:38:40.545 00:38:40.545 20:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:38:40.803 20:37:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:41.062 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:38:41.062 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:38:41.062 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:38:41.062 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:41.062 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:38:41.062 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:41.062 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:38:41.063 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:41.063 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:41.063 rmmod nvme_tcp 00:38:41.063 rmmod nvme_fabrics 00:38:41.063 rmmod nvme_keyring 00:38:41.063 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:41.063 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:38:41.063 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:38:41.063 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 86845 ']' 00:38:41.063 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 86845 00:38:41.063 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 86845 ']' 00:38:41.063 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 86845 00:38:41.063 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:38:41.063 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:41.063 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86845 00:38:41.063 killing process with pid 86845 00:38:41.063 20:37:36 
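The --transport-stat block above separates productive polls from idle ones; for both queues the busy-poll count (polls - idle_polls) equals sock_completions exactly, i.e. a poll is counted busy precisely when it completed socket work. Deriving the busy fraction and NVMe completions per busy poll from those figures:

    awk 'BEGIN {
        polls = 14490; idle = 10305; nvme = 4909    # NSID 1 queue
        printf "nsid1: %.1f%% busy, %.2f nvme completions/busy poll\n", \
               100 * (polls - idle) / polls, nvme / (polls - idle)
        polls = 14459; idle = 10793; nvme = 6845    # NSID 2 queue
        printf "nsid2: %.1f%% busy, %.2f nvme completions/busy poll\n", \
               100 * (polls - idle) / polls, nvme / (polls - idle)
    }'
    # -> nsid1: 28.9% busy, 1.17/poll; nsid2: 25.4% busy, 1.87/poll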
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:41.063 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:41.063 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86845' 00:38:41.063 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 86845 00:38:41.063 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 86845 00:38:41.630 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:41.630 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:41.630 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:41.630 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:38:41.630 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:38:41.630 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:38:41.630 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:41.630 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:41.630 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:38:41.630 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:38:41.889 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:38:41.889 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:38:41.889 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:38:41.889 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:38:41.889 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:38:41.889 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:38:41.889 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:38:41.889 20:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:38:41.889 20:37:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:38:41.889 20:37:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:38:41.889 20:37:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:41.889 20:37:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:41.889 20:37:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:38:41.889 20:37:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:41.889 20:37:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:41.889 20:37:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.889 20:37:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:38:41.889 00:38:41.889 real 0m14.068s 00:38:41.889 user 0m50.579s 00:38:41.889 sys 0m3.426s 00:38:41.889 20:37:37 
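The real/user/sys triple above and the banner pairs just below come from run_test, which drives every suite in this log. A hedged sketch of the wrapper (the real helper is in autotest_common.sh; the argument check and banner text are inferred from the trace, details simplified):

    run_test() {
        local test_name=$1; shift
        [ $# -le 1 ] || :   # the '[' 3 -le 1 ']' step visible in the trace
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"           # the suite script itself, e.g. fio.sh --transport=tcp
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }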
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:41.889 ************************************ 00:38:41.889 20:37:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:38:41.889 END TEST nvmf_perf 00:38:41.889 ************************************ 00:38:41.889 20:37:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:38:41.889 20:37:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:41.889 20:37:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:41.889 20:37:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:42.149 ************************************ 00:38:42.149 START TEST nvmf_fio_host 00:38:42.149 ************************************ 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:38:42.149 * Looking for test storage... 00:38:42.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:42.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:42.149 --rc genhtml_branch_coverage=1 00:38:42.149 --rc genhtml_function_coverage=1 00:38:42.149 --rc genhtml_legend=1 00:38:42.149 --rc geninfo_all_blocks=1 00:38:42.149 --rc geninfo_unexecuted_blocks=1 00:38:42.149 00:38:42.149 ' 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:42.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:42.149 --rc genhtml_branch_coverage=1 00:38:42.149 --rc genhtml_function_coverage=1 00:38:42.149 --rc genhtml_legend=1 00:38:42.149 --rc geninfo_all_blocks=1 00:38:42.149 --rc geninfo_unexecuted_blocks=1 00:38:42.149 00:38:42.149 ' 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:42.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:42.149 --rc genhtml_branch_coverage=1 00:38:42.149 --rc genhtml_function_coverage=1 00:38:42.149 --rc genhtml_legend=1 00:38:42.149 --rc geninfo_all_blocks=1 00:38:42.149 --rc geninfo_unexecuted_blocks=1 00:38:42.149 00:38:42.149 ' 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:42.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:42.149 --rc genhtml_branch_coverage=1 00:38:42.149 --rc genhtml_function_coverage=1 00:38:42.149 --rc genhtml_legend=1 00:38:42.149 --rc geninfo_all_blocks=1 00:38:42.149 --rc geninfo_unexecuted_blocks=1 00:38:42.149 00:38:42.149 ' 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.149 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:42.150 20:37:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.150 20:37:37 
00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:42.150 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
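The nvmf_veth_init sequence traced next first tears down any leftover interfaces (hence the harmless Cannot find device messages), then builds the virtual topology: a network namespace nvmf_tgt_ns_spdk holds the target ends of two veth pairs, the initiator ends stay in the root namespace, and a bridge nvmf_br joins the peer ends so 10.0.0.1 through 10.0.0.4 are mutually reachable. Condensed to one initiator/target pair, the commands below (taken from the trace) are the whole recipe:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                      # bridge the peer ends together
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # open TCP/4420 toward the initiator interface, tagged so teardown can find the rule
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

The SPDK_NVMF comment is what the cleanup path filters out later (iptables-save | grep -v SPDK_NVMF | iptables-restore), and the four pings that follow verify the topology before the target is launched.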
00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # nvmf_veth_init 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:42.150 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:42.151 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:38:42.151 Cannot find device "nvmf_init_br" 00:38:42.151 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:38:42.151 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:38:42.151 Cannot find device "nvmf_init_br2" 00:38:42.151 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:38:42.151 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:38:42.151 Cannot find device "nvmf_tgt_br" 00:38:42.151 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:38:42.151 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:38:42.151 Cannot find device "nvmf_tgt_br2" 00:38:42.151 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:38:42.151 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:38:42.151 Cannot find device "nvmf_init_br" 00:38:42.151 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:38:42.151 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:38:42.409 Cannot find device "nvmf_init_br2" 00:38:42.409 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:38:42.409 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:38:42.410 Cannot find device "nvmf_tgt_br" 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:38:42.410 Cannot find device "nvmf_tgt_br2" 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:38:42.410 Cannot find device "nvmf_br" 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:38:42.410 Cannot find device "nvmf_init_if" 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:38:42.410 Cannot find device "nvmf_init_if2" 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:42.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:42.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:42.410 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:38:42.669 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:38:42.669 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:38:42.669 00:38:42.669 --- 10.0.0.3 ping statistics --- 00:38:42.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:42.669 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:38:42.669 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:38:42.669 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:38:42.669 00:38:42.669 --- 10.0.0.4 ping statistics --- 00:38:42.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:42.669 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:42.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:42.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:38:42.669 00:38:42.669 --- 10.0.0.1 ping statistics --- 00:38:42.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:42.669 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:38:42.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:42.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:38:42.669 00:38:42.669 --- 10.0.0.2 ping statistics --- 00:38:42.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:42.669 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # return 0 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87368 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87368 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@831 -- # '[' -z 87368 ']' 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:42.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:42.669 20:37:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:42.669 [2024-10-01 20:37:37.783007] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:38:42.669 [2024-10-01 20:37:37.783099] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:42.928 [2024-10-01 20:37:37.922771] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:42.928 [2024-10-01 20:37:37.995874] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:42.928 [2024-10-01 20:37:37.995940] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:42.928 [2024-10-01 20:37:37.995955] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:42.928 [2024-10-01 20:37:37.995965] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:42.928 [2024-10-01 20:37:37.995974] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
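waitforlisten, traced above with max_retries=100, blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock before any configuration RPCs are sent. A bare-bones equivalent of that polling loop might look like this; it is a sketch, not the actual helper, and assumes rpc_get_methods is available (it is one of SPDK's built-in RPCs):

    # Sketch: wait for the SPDK app to come up on its RPC socket.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        if "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break    # target is up and serving RPCs
        fi
        sleep 0.5
    done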
00:38:42.928 [2024-10-01 20:37:37.996141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:42.928 [2024-10-01 20:37:37.996793] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:38:42.928 [2024-10-01 20:37:38.000147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:38:42.928 [2024-10-01 20:37:38.000203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:42.928 20:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:42.928 20:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:38:42.928 20:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:43.187 [2024-10-01 20:37:38.377480] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:43.187 20:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:38:43.187 20:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:43.187 20:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.187 20:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:38:43.755 Malloc1 00:38:43.755 20:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:44.012 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:38:44.271 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:38:44.529 [2024-10-01 20:37:39.597846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:44.529 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:38:44.788 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:38:44.788 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:38:44.788 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:38:44.788 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:44.788 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:44.788 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:44.789 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:38:44.789 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # shift 00:38:44.789 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:44.789 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:44.789 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:38:44.789 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:38:44.789 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:44.789 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:44.789 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:44.789 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:44.789 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:38:44.789 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:44.789 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:44.789 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:44.789 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:44.789 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:38:44.789 20:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:38:45.047 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:38:45.047 fio-3.35 00:38:45.047 Starting 1 thread 00:38:47.580 00:38:47.580 test: (groupid=0, jobs=1): err= 0: pid=87481: Tue Oct 1 20:37:42 2024 00:38:47.580 read: IOPS=9548, BW=37.3MiB/s (39.1MB/s)(74.9MiB/2007msec) 00:38:47.580 slat (nsec): min=1841, max=426566, avg=2523.73, stdev=4087.91 00:38:47.580 clat (usec): min=3496, max=12863, avg=6996.96, stdev=618.54 00:38:47.580 lat (usec): min=3556, max=12865, avg=6999.48, stdev=618.41 00:38:47.580 clat percentiles (usec): 00:38:47.580 | 1.00th=[ 5800], 5.00th=[ 6194], 10.00th=[ 6325], 20.00th=[ 6521], 00:38:47.580 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 7046], 00:38:47.580 | 70.00th=[ 7242], 80.00th=[ 7373], 90.00th=[ 7701], 95.00th=[ 7963], 00:38:47.580 | 99.00th=[ 9241], 99.50th=[ 9896], 99.90th=[10945], 99.95th=[11731], 00:38:47.580 | 99.99th=[12780] 00:38:47.580 bw ( KiB/s): min=36976, max=38976, per=100.00%, avg=38200.00, stdev=862.07, samples=4 00:38:47.580 iops : min= 9244, max= 9744, avg=9550.00, stdev=215.52, samples=4 00:38:47.580 write: IOPS=9559, BW=37.3MiB/s (39.2MB/s)(74.9MiB/2007msec); 0 zone resets 00:38:47.580 slat (nsec): min=1969, max=294335, avg=2613.20, stdev=2607.09 00:38:47.580 clat (usec): min=2649, max=12888, avg=6333.33, stdev=541.62 00:38:47.580 lat (usec): min=2663, max=12890, avg=6335.95, stdev=541.53 00:38:47.580 clat percentiles (usec): 00:38:47.580 | 1.00th=[ 5276], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 5932], 
00:38:47.580 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6259], 60.00th=[ 6390], 00:38:47.580 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[ 6915], 95.00th=[ 7177], 00:38:47.580 | 99.00th=[ 7963], 99.50th=[ 8455], 99.90th=[10552], 99.95th=[11863], 00:38:47.580 | 99.99th=[12780] 00:38:47.580 bw ( KiB/s): min=37832, max=38720, per=100.00%, avg=38242.00, stdev=368.37, samples=4 00:38:47.580 iops : min= 9458, max= 9680, avg=9560.50, stdev=92.09, samples=4 00:38:47.580 lat (msec) : 4=0.07%, 10=99.64%, 20=0.29% 00:38:47.580 cpu : usr=67.50%, sys=23.93%, ctx=38, majf=0, minf=6 00:38:47.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:38:47.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:47.581 issued rwts: total=19163,19185,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.581 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:47.581 00:38:47.581 Run status group 0 (all jobs): 00:38:47.581 READ: bw=37.3MiB/s (39.1MB/s), 37.3MiB/s-37.3MiB/s (39.1MB/s-39.1MB/s), io=74.9MiB (78.5MB), run=2007-2007msec 00:38:47.581 WRITE: bw=37.3MiB/s (39.2MB/s), 37.3MiB/s-37.3MiB/s (39.2MB/s-39.2MB/s), io=74.9MiB (78.6MB), run=2007-2007msec 00:38:47.581 20:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:38:47.581 20:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:38:47.581 20:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:47.581 20:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:47.581 20:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:47.581 20:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:38:47.581 20:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:38:47.581 20:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:47.581 20:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:47.581 20:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:38:47.581 20:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:38:47.581 20:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:47.581 20:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:47.581 20:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:47.581 20:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:47.581 20:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:38:47.581 20:37:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:38:47.581 20:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:47.581 20:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:38:47.581 20:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:38:47.581 20:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:38:47.581 20:37:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:38:47.581 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:38:47.581 fio-3.35 00:38:47.581 Starting 1 thread 00:38:50.118 00:38:50.118 test: (groupid=0, jobs=1): err= 0: pid=87529: Tue Oct 1 20:37:44 2024 00:38:50.118 read: IOPS=8519, BW=133MiB/s (140MB/s)(267MiB/2005msec) 00:38:50.118 slat (usec): min=2, max=118, avg= 3.59, stdev= 2.36 00:38:50.118 clat (usec): min=2425, max=17380, avg=8703.87, stdev=2178.38 00:38:50.118 lat (usec): min=2429, max=17383, avg=8707.46, stdev=2178.54 00:38:50.118 clat percentiles (usec): 00:38:50.118 | 1.00th=[ 4424], 5.00th=[ 5407], 10.00th=[ 5997], 20.00th=[ 6783], 00:38:50.118 | 30.00th=[ 7439], 40.00th=[ 7963], 50.00th=[ 8586], 60.00th=[ 9372], 00:38:50.118 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[11338], 95.00th=[12387], 00:38:50.118 | 99.00th=[14484], 99.50th=[15270], 99.90th=[17171], 99.95th=[17171], 00:38:50.118 | 99.99th=[17433] 00:38:50.118 bw ( KiB/s): min=60928, max=86144, per=52.68%, avg=71808.00, stdev=11088.57, samples=4 00:38:50.118 iops : min= 3808, max= 5384, avg=4488.00, stdev=693.04, samples=4 00:38:50.118 write: IOPS=5216, BW=81.5MiB/s (85.5MB/s)(147MiB/1803msec); 0 zone resets 00:38:50.118 slat (usec): min=31, max=374, avg=37.02, stdev= 9.88 00:38:50.118 clat (usec): min=4216, max=18004, avg=10699.28, stdev=1832.55 00:38:50.118 lat (usec): min=4249, max=18037, avg=10736.30, stdev=1834.36 00:38:50.118 clat percentiles (usec): 00:38:50.118 | 1.00th=[ 7046], 5.00th=[ 7963], 10.00th=[ 8586], 20.00th=[ 9110], 00:38:50.118 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10552], 60.00th=[11076], 00:38:50.118 | 70.00th=[11469], 80.00th=[12256], 90.00th=[13173], 95.00th=[13960], 00:38:50.118 | 99.00th=[15795], 99.50th=[16450], 99.90th=[17695], 99.95th=[17957], 00:38:50.118 | 99.99th=[17957] 00:38:50.118 bw ( KiB/s): min=63616, max=90016, per=89.60%, avg=74792.00, stdev=11812.62, samples=4 00:38:50.118 iops : min= 3976, max= 5626, avg=4674.50, stdev=738.29, samples=4 00:38:50.118 lat (msec) : 4=0.26%, 10=59.25%, 20=40.49% 00:38:50.118 cpu : usr=75.00%, sys=16.87%, ctx=19, majf=0, minf=23 00:38:50.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:38:50.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:50.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:50.118 issued rwts: total=17081,9406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:50.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:50.118 00:38:50.118 Run status group 0 (all jobs): 00:38:50.118 READ: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=267MiB (280MB), run=2005-2005msec 00:38:50.118 
WRITE: bw=81.5MiB/s (85.5MB/s), 81.5MiB/s-81.5MiB/s (85.5MB/s-85.5MB/s), io=147MiB (154MB), run=1803-1803msec 00:38:50.118 20:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:50.118 rmmod nvme_tcp 00:38:50.118 rmmod nvme_fabrics 00:38:50.118 rmmod nvme_keyring 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 87368 ']' 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 87368 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 87368 ']' 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 87368 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87368 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:50.118 killing process with pid 87368 00:38:50.118 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87368' 00:38:50.119 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 87368 00:38:50.119 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 87368 00:38:50.377 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:50.377 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:50.377 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:50.377 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:38:50.377 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:38:50.377 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:50.377 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:38:50.377 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:50.377 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:38:50.377 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:38:50.377 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:38:50.377 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:38:50.377 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:38:50.377 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:38:50.377 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:38:50.635 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:38:50.635 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:38:50.635 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:38:50.635 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:38:50.635 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:38:50.635 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:50.635 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:50.635 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:38:50.635 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:50.635 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:50.635 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:50.635 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:38:50.635 00:38:50.635 real 0m8.652s 00:38:50.635 user 0m34.485s 00:38:50.635 sys 0m2.337s 00:38:50.635 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:50.635 20:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:38:50.635 ************************************ 00:38:50.635 END TEST nvmf_fio_host 00:38:50.635 ************************************ 00:38:50.635 20:37:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:38:50.635 20:37:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:50.635 20:37:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:50.635 20:37:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:50.635 ************************************ 00:38:50.635 START TEST nvmf_failover 00:38:50.635 ************************************ 00:38:50.635 20:37:45 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:38:50.895 * Looking for test storage... 00:38:50.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:38:50.895 20:37:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:50.895 20:37:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:38:50.896 20:37:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:50.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.896 --rc genhtml_branch_coverage=1 00:38:50.896 --rc genhtml_function_coverage=1 00:38:50.896 --rc genhtml_legend=1 00:38:50.896 --rc geninfo_all_blocks=1 00:38:50.896 --rc geninfo_unexecuted_blocks=1 00:38:50.896 00:38:50.896 ' 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:50.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.896 --rc genhtml_branch_coverage=1 00:38:50.896 --rc genhtml_function_coverage=1 00:38:50.896 --rc genhtml_legend=1 00:38:50.896 --rc geninfo_all_blocks=1 00:38:50.896 --rc geninfo_unexecuted_blocks=1 00:38:50.896 00:38:50.896 ' 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:50.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.896 --rc genhtml_branch_coverage=1 00:38:50.896 --rc genhtml_function_coverage=1 00:38:50.896 --rc genhtml_legend=1 00:38:50.896 --rc geninfo_all_blocks=1 00:38:50.896 --rc geninfo_unexecuted_blocks=1 00:38:50.896 00:38:50.896 ' 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:50.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.896 --rc genhtml_branch_coverage=1 00:38:50.896 --rc genhtml_function_coverage=1 00:38:50.896 --rc genhtml_legend=1 00:38:50.896 --rc geninfo_all_blocks=1 00:38:50.896 --rc geninfo_unexecuted_blocks=1 00:38:50.896 00:38:50.896 ' 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
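The NVME_HOSTNQN generated by nvme gen-hostnqn above is captured together with NVME_HOSTID in the NVME_HOST array so kernel-initiator cases can identify themselves consistently to the target. Expanded by hand against the first target address, the resulting nvme-cli call would take this shape (illustrative only; no kernel connect is traced at this point in the log):

    nvme connect -t tcp -a 10.0.0.3 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 \
        --hostid=e1b29161-2f6e-4e88-9903-c5274a848703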
00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.896 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:50.897 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no
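Unlike the fio host test, failover.sh drives I/O with bdevperf and gives it a dedicated RPC socket (bdevperf_rpc_sock=/var/tmp/bdevperf.sock, set just above), so the target on /var/tmp/spdk.sock stays reachable for listener changes while I/O is running. The usual shape of that arrangement is sketched below; queue depth, I/O size, runtime, and the controller/subsystem names are placeholder values, and the exact flags should be checked against the SPDK revision in use:

    # Sketch: bdevperf on its own RPC socket, configured after start-up (-z waits for RPC).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    # Attach the remote controller through the bdevperf socket, then trigger the run.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests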
00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # nvmf_veth_init 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:38:50.897 Cannot find device "nvmf_init_br" 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:38:50.897 Cannot find device "nvmf_init_br2" 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:38:50.897 Cannot find device "nvmf_tgt_br" 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:38:50.897 Cannot find device "nvmf_tgt_br2" 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:38:50.897 Cannot find device "nvmf_init_br" 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:38:50.897 Cannot find device "nvmf_init_br2" 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:38:50.897 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:38:51.156 Cannot find device "nvmf_tgt_br" 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:38:51.156 Cannot find device "nvmf_tgt_br2" 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:38:51.156 Cannot find device "nvmf_br" 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:38:51.156 Cannot find device "nvmf_init_if" 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:38:51.156 Cannot find device "nvmf_init_if2" 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:51.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:51.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:51.156 
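The remaining interface moves, the 10.0.0.x addressing, and the bridge enslavement are traced below; condensed, the veth/namespace topology nvmf_veth_init is building here reduces to the following standalone sketch (plain iproute2, device names and addresses exactly as in this run; the second pair, nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4, is handled identically):

    ip netns add nvmf_tgt_ns_spdk                              # target gets its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # target end moves into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target listener address
    ip link add nvmf_br type bridge                            # bridge ties the *_br peer ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # plus 'ip link set ... up' on each device and the iptables ACCEPT rules, as traced below

The result is an initiator at 10.0.0.1 on the host and a namespaced target listening on 10.0.0.3, bridged together so the NVMe/TCP traffic on port 4420 can flow between them.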
20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:51.156 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:38:51.157 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:38:51.157 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:38:51.157 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:38:51.416 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:51.416 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:38:51.416 00:38:51.416 --- 10.0.0.3 ping statistics --- 00:38:51.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:51.416 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:38:51.416 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:38:51.416 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:38:51.416 00:38:51.416 --- 10.0.0.4 ping statistics --- 00:38:51.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:51.416 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:51.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:51.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:38:51.416 00:38:51.416 --- 10.0.0.1 ping statistics --- 00:38:51.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:51.416 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:38:51.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:51.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:38:51.416 00:38:51.416 --- 10.0.0.2 ping statistics --- 00:38:51.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:51.416 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # return 0 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=87805 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 87805 00:38:51.416 20:37:46 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 87805 ']' 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:51.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:51.416 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:38:51.416 [2024-10-01 20:37:46.565272] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:38:51.416 [2024-10-01 20:37:46.565363] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:51.676 [2024-10-01 20:37:46.704945] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:51.676 [2024-10-01 20:37:46.757034] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:51.676 [2024-10-01 20:37:46.757101] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:51.676 [2024-10-01 20:37:46.757120] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:51.676 [2024-10-01 20:37:46.757128] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:51.676 [2024-10-01 20:37:46.757134] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
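The three reactor threads that start below follow directly from the -m 0xE core mask passed to nvmf_tgt: 0xE is binary 1110, selecting cores 1, 2 and 3 and matching the "Total cores available: 3" notice above, while core 0 is left free for the bdevperf initiator (started later with -c 0x1). A quick mask check in plain shell arithmetic, nothing SPDK-specific:

    # decode a DPDK/SPDK core mask: print each core whose bit is set in 0xE
    for c in 0 1 2 3; do
        (( 0xE & (1 << c) )) && printf '%d ' "$c"
    done; echo    # prints: 1 2 3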
00:38:51.676 [2024-10-01 20:37:46.757289] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:38:51.676 [2024-10-01 20:37:46.757448] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:51.676 [2024-10-01 20:37:46.757982] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:38:51.676 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:51.676 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:38:51.676 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:51.676 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:51.676 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:38:51.676 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:51.676 20:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:52.244 [2024-10-01 20:37:47.216519] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:52.244 20:37:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:38:52.504 Malloc0 00:38:52.504 20:37:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:52.763 20:37:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:53.022 20:37:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:38:53.022 [2024-10-01 20:37:48.244821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:53.022 20:37:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:38:53.281 [2024-10-01 20:37:48.481054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:38:53.281 20:37:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:38:53.553 [2024-10-01 20:37:48.721224] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:38:53.553 20:37:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=87899 00:38:53.553 20:37:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:38:53.553 20:37:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:53.553 20:37:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 87899 /var/tmp/bdevperf.sock 
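At this point the target side is fully configured and bdevperf is up, waiting on /var/tmp/bdevperf.sock. Everything the failover run needs amounted to a handful of RPCs plus the listener loop; condensed into a sketch (rpc.py standing in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, flags and values exactly as traced above):

    rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, flags as in this run
    rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MB malloc bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                   # three listeners = three failover paths
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
    done

Below, the host attaches the same controller name (NVMe0) over ports 4420 and 4421, so removing one listener at a time forces the verify workload over to the surviving path.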
00:38:53.553 20:37:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 87899 ']'
00:38:53.553 20:37:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:38:53.553 20:37:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:38:53.553 20:37:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
20:37:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:38:53.553 20:37:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:38:54.943 20:37:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:38:54.943 20:37:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:38:54.943 20:37:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:38:54.943 NVMe0n1
00:38:55.201 20:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:38:55.201
00:38:55.201 20:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=87946
00:38:55.201 20:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:38:55.201 20:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:38:56.579 20:37:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:38:56.579 [2024-10-01 20:37:51.677699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1184c30 is same with the state(6) to be set
00:38:56.579 (the same recv-state message for tqpair=0x1184c30 repeats while the 4420 connection is torn down; the duplicate lines are elided here)
00:38:56.580 20:37:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:38:59.868 20:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:38:59.868
00:38:59.868 20:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:39:00.127 [2024-10-01 20:37:55.290043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1185a00 is same with the state(6) to be set
00:39:00.127 (the same recv-state message for tqpair=0x1185a00 repeats while the 4421 connection is torn down; the duplicate lines are elided here)
00:39:00.128 20:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:39:03.418 20:37:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:39:03.418 [2024-10-01 20:37:58.575698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:39:03.418 20:37:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:39:04.356 20:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:39:04.925 [2024-10-01 20:37:59.887680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11865f0 is same with the state(6) to be set
00:39:04.925 (the same recv-state message for tqpair=0x11865f0 repeats while the 4422 connection is torn down; the duplicate lines are elided here)
00:39:04.925 20:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 87946
00:39:11.504 {
00:39:11.504 "results": [
00:39:11.504 {
00:39:11.504 "job": "NVMe0n1",
00:39:11.504 "core_mask": "0x1",
00:39:11.504 "workload": "verify",
00:39:11.504 "status": "finished",
00:39:11.504 "verify_range": {
00:39:11.504 "start": 0,
00:39:11.504 "length": 16384
00:39:11.504 },
00:39:11.504 "queue_depth": 128,
00:39:11.504 "io_size": 4096,
00:39:11.504 "runtime": 15.004936,
00:39:11.504 "iops": 10057.09054673742,
00:39:11.504 "mibps": 39.285509948193045,
00:39:11.504 "io_failed": 3165,
00:39:11.504 "io_timeout": 0,
00:39:11.504 "avg_latency_us": 12437.318734160932,
00:39:11.504 "min_latency_us": 532.48,
00:39:11.504 "max_latency_us": 22043.927272727273
00:39:11.504 }
00:39:11.504 ],
00:39:11.504 "core_count": 1
00:39:11.504 }
00:39:11.504 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 87899
00:39:11.504 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 87899 ']'
00:39:11.504 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 87899
00:39:11.504 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:39:11.504 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:39:11.504 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87899
00:39:11.504 killing process with pid 87899
00:39:11.504 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
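The JSON block above is bdevperf's summary of the 15-second verify run: about 10057 IOPS at a 4096-byte I/O size and queue depth 128, with the 3165 failed I/O corresponding to the commands aborted during the three listener removals, and the job still reported as "finished". The throughput and IOPS figures are mutually consistent (a quick check, nothing SPDK-specific):

    # iops * io_size, converted to MiB/s, should reproduce the reported "mibps"
    awk 'BEGIN { print 10057.09054673742 * 4096 / (1024 * 1024) }'   # -> 39.2855, matching "mibps": 39.285...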
00:39:11.504 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:39:11.504 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87899'
00:39:11.504 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 87899
00:39:11.504 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 87899
00:39:11.504 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:39:11.504 [2024-10-01 20:37:48.799675] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization...
00:39:11.504 [2024-10-01 20:37:48.799788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87899 ]
00:39:11.504 [2024-10-01 20:37:48.942170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:39:11.504 [2024-10-01 20:37:49.012030] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:39:11.504 Running I/O for 15 seconds...
00:39:11.505 10066.00 IOPS, 39.32 MiB/s
00:39:11.505 [2024-10-01 20:37:51.678417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:11.505 [2024-10-01 20:37:51.678463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:11.505 (try.txt continues with one nvme_qpair print_command / print_completion pair for every I/O still outstanding on qid:1 -- READs and WRITEs covering LBAs 94472 through 95320 -- each completed as ABORTED - SQ DELETION (00/08) when the 4420 listener was removed; the repetitive pairs are elided here)
00:39:11.507 [2024-10-01 20:37:51.680935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:39:11.507 [2024-10-01 20:37:51.680948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507
[2024-10-01 20:37:51.680962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.507 [2024-10-01 20:37:51.680976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.680990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.507 [2024-10-01 20:37:51.681003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.681018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.507 [2024-10-01 20:37:51.681032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.681047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.507 [2024-10-01 20:37:51.681061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.681077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.507 [2024-10-01 20:37:51.681090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.681104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.507 [2024-10-01 20:37:51.681117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.681143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.507 [2024-10-01 20:37:51.681157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.681172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.507 [2024-10-01 20:37:51.681185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.681199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.507 [2024-10-01 20:37:51.681212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.681227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.507 [2024-10-01 20:37:51.681247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.681263] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.507 [2024-10-01 20:37:51.681276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.681290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.507 [2024-10-01 20:37:51.681303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.681318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.507 [2024-10-01 20:37:51.681331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.681346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.507 [2024-10-01 20:37:51.681359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.681374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.507 [2024-10-01 20:37:51.681387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.681401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.507 [2024-10-01 20:37:51.681414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.681428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.507 [2024-10-01 20:37:51.681442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.681456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.507 [2024-10-01 20:37:51.681469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.681484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.507 [2024-10-01 20:37:51.681496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.681516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.507 [2024-10-01 20:37:51.681531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.681546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:113 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.507 [2024-10-01 20:37:51.681559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.681574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.507 [2024-10-01 20:37:51.681587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.681608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.507 [2024-10-01 20:37:51.681622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.507 [2024-10-01 20:37:51.681636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.508 [2024-10-01 20:37:51.681649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.681664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.508 [2024-10-01 20:37:51.681677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.681691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.508 [2024-10-01 20:37:51.681704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.681718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.508 [2024-10-01 20:37:51.681731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.681746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.508 [2024-10-01 20:37:51.681758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.681773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.508 [2024-10-01 20:37:51.681786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.681800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.508 [2024-10-01 20:37:51.681813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.681828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95056 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.508 [2024-10-01 20:37:51.681840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.681855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.508 [2024-10-01 20:37:51.681868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.681883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.508 [2024-10-01 20:37:51.681896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.681910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.508 [2024-10-01 20:37:51.681923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.681937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.508 [2024-10-01 20:37:51.681956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.681974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.508 [2024-10-01 20:37:51.681990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.682005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.508 [2024-10-01 20:37:51.682018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.682033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.508 [2024-10-01 20:37:51.682046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.682061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.508 [2024-10-01 20:37:51.682074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.682088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.508 [2024-10-01 20:37:51.682101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.682126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.508 
[2024-10-01 20:37:51.682140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.682155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.508 [2024-10-01 20:37:51.682168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.682182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.508 [2024-10-01 20:37:51.682196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.682210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.508 [2024-10-01 20:37:51.682223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.682237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.508 [2024-10-01 20:37:51.682250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.682265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.508 [2024-10-01 20:37:51.682277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.682292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.508 [2024-10-01 20:37:51.682305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.682319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.508 [2024-10-01 20:37:51.682339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.682355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.508 [2024-10-01 20:37:51.682368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.682383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.508 [2024-10-01 20:37:51.682396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.682426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:11.508 [2024-10-01 20:37:51.682440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:39:11.508 [2024-10-01 20:37:51.682452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95480 len:8 PRP1 0x0 PRP2 0x0 00:39:11.508 [2024-10-01 20:37:51.682467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.682513] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1673170 was disconnected and freed. reset controller. 00:39:11.508 [2024-10-01 20:37:51.682530] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:39:11.508 [2024-10-01 20:37:51.682580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:11.508 [2024-10-01 20:37:51.682601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.682616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:11.508 [2024-10-01 20:37:51.682629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.682643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:11.508 [2024-10-01 20:37:51.682655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.682668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:11.508 [2024-10-01 20:37:51.682681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.508 [2024-10-01 20:37:51.682694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:11.508 [2024-10-01 20:37:51.686355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:11.509 [2024-10-01 20:37:51.686390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16005f0 (9): Bad file descriptor 00:39:11.509 [2024-10-01 20:37:51.723296] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:39:11.509 9932.50 IOPS, 38.80 MiB/s 10012.33 IOPS, 39.11 MiB/s 10066.50 IOPS, 39.32 MiB/s
[... repeated command/completion pairs omitted (elapsed 00:39:11.509-00:39:11.512, timestamps 2024-10-01 20:37:55.291xxx-294xxx): nvme_io_qpair_print_command *NOTICE* READ sqid:1 lba:896-1160 len:8 and WRITE sqid:1 lba:1168-1648 len:8, each completed ABORTED - SQ DELETION (00/08) qid:1 ...]
[... repeated abort cycles omitted: nvme_qpair_abort_queued_reqs *ERROR* aborting queued i/o, then nvme_qpair_manual_complete_request *NOTICE* Command completed manually: WRITE sqid:1 cid:0 nsid:1 lba:1656-1776 len:8 PRP1 0x0 PRP2 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:39:11.512 [2024-10-01 20:37:55.294797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:39:11.512 [2024-10-01 20:37:55.294806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:11.512 [2024-10-01 20:37:55.294815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1784 len:8 PRP1 0x0 PRP2 0x0 00:39:11.512 [2024-10-01 20:37:55.294827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.512 [2024-10-01 20:37:55.294843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:11.512 [2024-10-01 20:37:55.294853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:11.512 [2024-10-01 20:37:55.294862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:8 PRP1 0x0 PRP2 0x0 00:39:11.512 [2024-10-01 20:37:55.294874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.512 [2024-10-01 20:37:55.294886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:11.512 [2024-10-01 20:37:55.294895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:11.512 [2024-10-01 20:37:55.294906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1800 len:8 PRP1 0x0 PRP2 0x0 00:39:11.512 [2024-10-01 20:37:55.294918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.512 [2024-10-01 20:37:55.294931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:11.512 [2024-10-01 20:37:55.294940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:11.512 [2024-10-01 20:37:55.294949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1808 len:8 PRP1 0x0 PRP2 0x0 00:39:11.512 [2024-10-01 20:37:55.294961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.512 [2024-10-01 20:37:55.294974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:11.512 [2024-10-01 20:37:55.294983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:11.513 [2024-10-01 20:37:55.295001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1816 len:8 PRP1 0x0 PRP2 0x0 00:39:11.513 [2024-10-01 20:37:55.295014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.513 [2024-10-01 20:37:55.295027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:11.513 [2024-10-01 20:37:55.295036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:11.513 [2024-10-01 20:37:55.295045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:8 PRP1 0x0 PRP2 0x0 00:39:11.513 [2024-10-01 20:37:55.295056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.513 [2024-10-01 20:37:55.295068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:11.513 [2024-10-01 20:37:55.295078] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:11.513 [2024-10-01 20:37:55.295087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1832 len:8 PRP1 0x0 PRP2 0x0 00:39:11.513 [2024-10-01 20:37:55.295099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.513 [2024-10-01 20:37:55.295111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:11.513 [2024-10-01 20:37:55.295120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:11.513 [2024-10-01 20:37:55.295129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1840 len:8 PRP1 0x0 PRP2 0x0 00:39:11.513 [2024-10-01 20:37:55.295150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.513 [2024-10-01 20:37:55.295164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:11.513 [2024-10-01 20:37:55.295173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:11.513 [2024-10-01 20:37:55.295182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1848 len:8 PRP1 0x0 PRP2 0x0 00:39:11.513 [2024-10-01 20:37:55.295194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.513 [2024-10-01 20:37:55.295208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:11.513 [2024-10-01 20:37:55.295218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:11.513 [2024-10-01 20:37:55.295227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:8 PRP1 0x0 PRP2 0x0 00:39:11.513 [2024-10-01 20:37:55.295239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.513 [2024-10-01 20:37:55.295251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:11.513 [2024-10-01 20:37:55.295260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:11.513 [2024-10-01 20:37:55.295271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1864 len:8 PRP1 0x0 PRP2 0x0 00:39:11.513 [2024-10-01 20:37:55.295283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.513 [2024-10-01 20:37:55.295295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:11.513 [2024-10-01 20:37:55.295304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:11.513 [2024-10-01 20:37:55.295313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1872 len:8 PRP1 0x0 PRP2 0x0 00:39:11.513 [2024-10-01 20:37:55.295325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.513 [2024-10-01 20:37:55.295337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:11.513 [2024-10-01 20:37:55.295352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:39:11.513 [2024-10-01 20:37:55.295362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1880 len:8 PRP1 0x0 PRP2 0x0 00:39:11.513 [2024-10-01 20:37:55.295374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.513 [2024-10-01 20:37:55.295386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:11.513 [2024-10-01 20:37:55.295395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:11.513 [2024-10-01 20:37:55.295405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:8 PRP1 0x0 PRP2 0x0 00:39:11.513 [2024-10-01 20:37:55.295416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.513 [2024-10-01 20:37:55.295428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:11.513 [2024-10-01 20:37:55.295437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:11.513 [2024-10-01 20:37:55.295446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1896 len:8 PRP1 0x0 PRP2 0x0 00:39:11.513 [2024-10-01 20:37:55.295458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.513 [2024-10-01 20:37:55.295470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:11.513 [2024-10-01 20:37:55.306870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:11.513 [2024-10-01 20:37:55.306912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1904 len:8 PRP1 0x0 PRP2 0x0 00:39:11.513 [2024-10-01 20:37:55.306933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.513 [2024-10-01 20:37:55.306957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:11.513 [2024-10-01 20:37:55.306971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:11.513 [2024-10-01 20:37:55.306985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1912 len:8 PRP1 0x0 PRP2 0x0 00:39:11.513 [2024-10-01 20:37:55.307003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.513 [2024-10-01 20:37:55.307062] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1675240 was disconnected and freed. reset controller. 
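Runs like the one above are SPDK's standard qpair-teardown trace: every in-flight and queued command on the dying submission queue is completed with the generic status ABORTED - SQ DELETION (00/08) before the qpair is freed. A minimal sketch of a run-length summarizer for these entries, useful when eyeballing long resets; the regex mirrors the fields nvme_io_qpair_print_command prints, while summarize() and the sample lines are hypothetical:

import re

# Matches the command half of SPDK's paired NOTICE lines, e.g.
#   "*NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0"
CMD = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:(\d+)")

def summarize(lines):
    """Collapse consecutive same-opcode, LBA-contiguous commands into runs."""
    runs = []  # each run: [opcode, first_lba, last_lba, count]
    for line in lines:
        m = CMD.search(line)
        if not m:
            continue
        op, lba, length = m.group(1), int(m.group(2)), int(m.group(3))
        if runs and runs[-1][0] == op and lba == runs[-1][2] + length:
            runs[-1][2] = lba   # extend the current contiguous run
            runs[-1][3] += 1
        else:
            runs.append([op, lba, lba, 1])
    return runs

# Hypothetical two-line sample in the same shape as the log above:
sample = [
    "*NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0",
    "*NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0",
]
print(summarize(sample))  # -> [['WRITE', 1544, 1552, 2]]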
00:39:11.513 [2024-10-01 20:37:55.307085] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4421 to 10.0.0.3:4422
00:39:11.513 [2024-10-01 20:37:55.307178 - 20:37:55.307323] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3, cid:2, cid:1, cid:0 nsid:0 cdw10:00000000 cdw11:00000000; each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [4 admin command/completion pairs condensed]
00:39:11.513 [2024-10-01 20:37:55.307360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:11.513 [2024-10-01 20:37:55.307427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16005f0 (9): Bad file descriptor
00:39:11.513 [2024-10-01 20:37:55.312578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:11.513 [2024-10-01 20:37:55.345328] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
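The failover notice above is the interesting event in this block: once qpair 0x1675240 is torn down, bdev_nvme moves the controller to the alternate transport ID, reconnects, and the test's I/O resumes after "Resetting controller successful". The path selection the notice implies is a wrap-around over the configured trids; a hypothetical sketch (SPDK's actual logic is the C code around bdev_nvme_failover_trid in bdev_nvme.c, which also tracks path state), assuming only the two paths this log mentions:

# Hypothetical illustration of the trid rotation implied by the notice;
# not SPDK's implementation.
PATHS = ["10.0.0.3:4421", "10.0.0.3:4422"]  # the two trids seen in this log

def next_path(current: str, paths=PATHS) -> str:
    """Return the path after `current`, wrapping around the configured list."""
    return paths[(paths.index(current) + 1) % len(paths)]

assert next_path("10.0.0.3:4421") == "10.0.0.3:4422"  # matches the failover notice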
00:39:11.513 9950.60 IOPS, 38.87 MiB/s 9980.50 IOPS, 38.99 MiB/s 10011.00 IOPS, 39.11 MiB/s 10026.50 IOPS, 39.17 MiB/s 10022.22 IOPS, 39.15 MiB/s [2024-10-01 20:37:59.887962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.513 [2024-10-01 20:37:59.888014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.513 [2024-10-01 20:37:59.888041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.513 [2024-10-01 20:37:59.888057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.513 [2024-10-01 20:37:59.888088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.513 [2024-10-01 20:37:59.888103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.513 [2024-10-01 20:37:59.888130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.513 [2024-10-01 20:37:59.888164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.513 [2024-10-01 20:37:59.888184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.513 [2024-10-01 20:37:59.888198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.513 [2024-10-01 20:37:59.888214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.888228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:113304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.888259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.888289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.888319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.888350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:113336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.888405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.888438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.888497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.888538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.888567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.888594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:113848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.514 [2024-10-01 20:37:59.888622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.514 [2024-10-01 20:37:59.888664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:113864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.514 [2024-10-01 20:37:59.888690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:113872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.514 [2024-10-01 20:37:59.888716] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.514 [2024-10-01 20:37:59.888741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.514 [2024-10-01 20:37:59.888767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.514 [2024-10-01 20:37:59.888793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.888843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.888869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.888895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.888922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.888949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.888975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.888990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.889002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.889016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:113440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.889029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.889042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.889055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.889069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.889082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.889096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.889108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.889139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.889169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.889184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.889218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.889241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.889255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.889271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.889285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.889301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.514 [2024-10-01 20:37:59.889316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.514 [2024-10-01 20:37:59.889332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:113512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.889346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.889362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:113520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.889376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.889391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.889405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.889421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:113536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.889435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.889450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.889464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.889480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.889494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.889524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.889567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.889581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:113568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.889594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.889609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:113576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.889622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.889644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.889658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.889672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.889699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:39:11.515 [2024-10-01 20:37:59.889713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.889725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.889739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.889751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.889765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:113616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.889778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.889792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:113624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.889804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.889818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.889830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.889844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.889857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.889870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:113648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.889883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.889897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.889909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.889923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.889935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.889949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.889962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 
20:37:59.889976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.889994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.890009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:113688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.890022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.890036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:113696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:11.515 [2024-10-01 20:37:59.890050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.890063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.515 [2024-10-01 20:37:59.890076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.890090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:113912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.515 [2024-10-01 20:37:59.890102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.890115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:113920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.515 [2024-10-01 20:37:59.890160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.890176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.515 [2024-10-01 20:37:59.890190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.890220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.515 [2024-10-01 20:37:59.890236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.515 [2024-10-01 20:37:59.890251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890311] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:113960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:113968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:114016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:114032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:82 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:114048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:114072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:114096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114120 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.890975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.890988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.891001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.891015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.891027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.891041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.891054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.891068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.891080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.891094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.891107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.891136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.891149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.891163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.891194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.891211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:11.516 [2024-10-01 20:37:59.891224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:11.516 [2024-10-01 20:37:59.891238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:39:11.516 [2024-10-01 20:37:59.891251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:11.516 [2024-10-01 20:37:59.891265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:39:11.516 [2024-10-01 20:37:59.891277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[8 further WRITE command/completion notice pairs, lba 114216-114272 (cid 28, 52, 70, 89, 32, 116, 118, 74), all ABORTED - SQ DELETION]
00:39:11.517 [2024-10-01 20:37:59.891521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:113704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:11.517 [2024-10-01 20:37:59.891534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[16 further READ command/completion notice pairs, lba 113712-113832 (cid 5, 104, 45, 37, 30, 86, 85, 80, 18, 4, 42, 98, 91, 73, 21, 125), all ABORTED - SQ DELETION]
00:39:11.517 [2024-10-01 20:37:59.892044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:39:11.517 [2024-10-01 20:37:59.892058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:39:11.517 [2024-10-01 20:37:59.892080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113840 len:8 PRP1 0x0 PRP2 0x0
00:39:11.517 [2024-10-01 20:37:59.892107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:11.517 [2024-10-01 20:37:59.892172] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1681790 was disconnected and freed. reset controller.
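The storm of ABORTED - SQ DELETION notices above is the expected signature of a path being torn down mid-workload: every command still queued on the deleted submission queue completes with that status, and bdev_nvme requeues the I/O once the controller has reset onto the next trid (the failover logged just below). For triage it is usually easier to tally these notices than to read them; two illustrative one-liners against the captured log (try.txt, the file this test cats and removes later in the trace):

    # How many completions were aborted by the SQ deletion?
    grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt

    # Break the printed commands down by opcode (READ vs WRITE).
    grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt | awk '{print $NF}' | sort | uniq -c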
00:39:11.517 [2024-10-01 20:37:59.892192] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4422 to 10.0.0.3:4420
00:39:11.517 [2024-10-01 20:37:59.892242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:39:11.517 [2024-10-01 20:37:59.892262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[3 further ASYNC EVENT REQUEST/completion notice pairs, qid 0 cid 1-3, all ABORTED - SQ DELETION]
00:39:11.517 [2024-10-01 20:37:59.892354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:11.517 [2024-10-01 20:37:59.892384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16005f0 (9): Bad file descriptor
00:39:11.517 [2024-10-01 20:37:59.895876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:11.517 [2024-10-01 20:37:59.927724] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:39:11.517 9981.30 IOPS, 38.99 MiB/s
9995.36 IOPS, 39.04 MiB/s
10015.50 IOPS, 39.12 MiB/s
10030.85 IOPS, 39.18 MiB/s
10040.57 IOPS, 39.22 MiB/s
00:39:11.517 Latency(us)
00:39:11.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:11.518 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:39:11.518 Verification LBA range: start 0x0 length 0x4000
00:39:11.518 NVMe0n1 : 15.00 10057.09 39.29 210.93 0.00 12437.32 532.48 22043.93
00:39:11.518 ===================================================================================================================
00:39:11.518 Total : 10057.09 39.29 210.93 0.00 12437.32 532.48 22043.93
00:39:11.518 Received shutdown signal, test time was about 15.000000 seconds
00:39:11.518
00:39:11.518 Latency(us)
00:39:11.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:11.518 ===================================================================================================================
00:39:11.518 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:39:11.518 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:39:11.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
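The grep -c above is this phase's entire pass criterion: the count=3 assignment on the next line must see one 'Resetting controller successful' notice per failover event driven through the three listeners. Condensed, the check amounts to no more than this sketch (log path as printed later in the trace):

    log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt

    count=$(grep -c 'Resetting controller successful' "$log")
    if (( count != 3 )); then
        echo "expected 3 successful resets, saw $count" >&2
        exit 1
    fi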
00:39:11.518 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:39:11.518 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:39:11.518 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=88150 00:39:11.518 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 88150 /var/tmp/bdevperf.sock 00:39:11.518 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:39:11.518 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 88150 ']' 00:39:11.518 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:11.518 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:11.518 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:11.518 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:11.518 20:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:11.776 20:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:11.776 20:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:39:11.776 20:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:39:12.035 [2024-10-01 20:38:07.038398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:39:12.035 20:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:39:12.294 [2024-10-01 20:38:07.334597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:39:12.294 20:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:12.552 NVMe0n1 00:39:12.552 20:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:12.810 00:39:12.810 20:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:13.069 00:39:13.069 20:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:39:13.069 20:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:39:13.637 20:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 
10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:13.637 20:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:39:16.924 20:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:39:16.924 20:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:39:16.924 20:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88293 00:39:16.924 20:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:16.924 20:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 88293
00:39:18.300 {
00:39:18.300   "results": [
00:39:18.300     {
00:39:18.300       "job": "NVMe0n1",
00:39:18.300       "core_mask": "0x1",
00:39:18.300       "workload": "verify",
00:39:18.300       "status": "finished",
00:39:18.300       "verify_range": {
00:39:18.300         "start": 0,
00:39:18.300         "length": 16384
00:39:18.300       },
00:39:18.300       "queue_depth": 128,
00:39:18.300       "io_size": 4096,
00:39:18.300       "runtime": 1.008098,
00:39:18.300       "iops": 10146.830962862738,
00:39:18.300       "mibps": 39.63605844868257,
00:39:18.300       "io_failed": 0,
00:39:18.300       "io_timeout": 0,
00:39:18.300       "avg_latency_us": 12556.457108932713,
00:39:18.300       "min_latency_us": 2040.5527272727272,
00:39:18.300       "max_latency_us": 13941.294545454546
00:39:18.300     }
00:39:18.300   ],
00:39:18.300   "core_count": 1
00:39:18.300 }
00:39:18.300 20:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:39:18.300 [2024-10-01 20:38:05.830550] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization...
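The RPC sequence just replayed is the multipath setup in miniature: the listeners published at @76/@77 give the subsystem three ports, and attaching each one under the same -b NVMe0 name registers 4421 and 4422 as failover trids for the existing controller rather than as new controllers. In outline (the loop is an editorial condensation; every command is taken verbatim from the trace):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock

    # Target side: publish the two failover ports on the subsystem.
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422

    # Initiator side: the first attach creates NVMe0n1, the later ones add alternate paths.
    for port in 4420 4421 4422; do
        $RPC -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 \
            -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done

The try.txt dump that follows shows the same run from bdevperf's point of view.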
00:39:18.300 [2024-10-01 20:38:05.831161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88150 ] 00:39:18.300 [2024-10-01 20:38:05.962487] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:18.300 [2024-10-01 20:38:06.016981] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:18.300 [2024-10-01 20:38:08.809870] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:39:18.300 [2024-10-01 20:38:08.809981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:18.300 [2024-10-01 20:38:08.810006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.300 [2024-10-01 20:38:08.810022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:18.300 [2024-10-01 20:38:08.810035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.300 [2024-10-01 20:38:08.810049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:18.300 [2024-10-01 20:38:08.810061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.300 [2024-10-01 20:38:08.810075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:18.300 [2024-10-01 20:38:08.810088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:18.301 [2024-10-01 20:38:08.810101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:18.301 [2024-10-01 20:38:08.810169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:18.301 [2024-10-01 20:38:08.810214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb575f0 (9): Bad file descriptor 00:39:18.301 [2024-10-01 20:38:08.821994] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:39:18.301 Running I/O for 1 seconds... 
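try.txt ends with the bdevperf side of the story: launched with -z, the app idles after initialization and only starts the one-second verify job (whose throughput line follows) when perform_tests arrives over its RPC socket. The driving pattern, per the @72 and @89 steps of the trace:

    SPDK=/home/vagrant/spdk_repo/spdk

    # Start bdevperf idle (-z) on an RPC socket; -t 1 caps each run at one second.
    "$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 1 -f &

    # ...attach controllers over the socket, then kick off the workload:
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

The JSON block printed by the earlier wait is friendlier to scripts than this table; for example, assuming it were redirected to results.json (an illustrative filename, not one the test uses):

    jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' results.json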
00:39:18.301 10083.00 IOPS, 39.39 MiB/s 00:39:18.301 Latency(us) 00:39:18.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:18.301 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:18.301 Verification LBA range: start 0x0 length 0x4000 00:39:18.301 NVMe0n1 : 1.01 10146.83 39.64 0.00 0.00 12556.46 2040.55 13941.29 00:39:18.301 =================================================================================================================== 00:39:18.301 Total : 10146.83 39.64 0.00 0.00 12556.46 2040.55 13941.29 00:39:18.301 20:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:39:18.301 20:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:39:18.301 20:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:18.869 20:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:39:18.869 20:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:39:18.869 20:38:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:19.129 20:38:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:39:22.419 20:38:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:39:22.419 20:38:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:39:22.419 20:38:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 88150 00:39:22.419 20:38:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 88150 ']' 00:39:22.419 20:38:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 88150 00:39:22.419 20:38:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:39:22.419 20:38:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:22.419 20:38:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88150 00:39:22.419 killing process with pid 88150 00:39:22.419 20:38:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:22.419 20:38:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:22.419 20:38:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88150' 00:39:22.419 20:38:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 88150 00:39:22.419 20:38:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 88150 00:39:22.679 20:38:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:39:22.679 20:38:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
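Teardown mirrors setup: the script verifies NVMe0 is still registered, strips the secondary trids one at a time, sleeps long enough for the reconnect logic to settle, re-checks the controller, and only then kills bdevperf. The @95-@103 steps reduce to:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

    for port in 4422 4421; do
        $RPC bdev_nvme_get_controllers | grep -q NVMe0   # controller must survive losing a path
        $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s "$port" \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done

    sleep 3
    $RPC bdev_nvme_get_controllers | grep -q NVMe0       # still alive on the remaining 4420 path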
00:39:22.938 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:39:22.938 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:39:22.938 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:39:22.938 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:22.938 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:39:22.938 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:22.938 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:39:22.938 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:22.938 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:22.938 rmmod nvme_tcp 00:39:22.938 rmmod nvme_fabrics 00:39:22.938 rmmod nvme_keyring 00:39:22.938 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:22.938 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:39:22.938 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:39:22.938 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 87805 ']' 00:39:22.938 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 87805 00:39:22.938 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 87805 ']' 00:39:22.938 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 87805 00:39:22.938 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:39:22.938 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:22.938 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87805 00:39:23.197 killing process with pid 87805 00:39:23.197 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:23.197 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:23.197 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87805' 00:39:23.197 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 87805 00:39:23.197 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 87805 00:39:23.197 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:23.197 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:23.197 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:23.197 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:39:23.197 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:23.197 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:39:23.197 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:39:23.197 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:23.197 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@299 -- # nvmf_veth_fini 00:39:23.197 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:39:23.197 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:39:23.197 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:39:23.197 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:39:23.458 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:39:23.458 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:39:23.458 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:39:23.458 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:39:23.458 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:39:23.458 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:39:23.458 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:39:23.458 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:23.458 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:23.458 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:39:23.458 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:23.458 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:23.458 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:23.458 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:39:23.458 00:39:23.458 real 0m32.774s 00:39:23.458 user 2m7.566s 00:39:23.458 sys 0m4.432s 00:39:23.458 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:23.458 20:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:23.458 ************************************ 00:39:23.458 END TEST nvmf_failover 00:39:23.458 ************************************ 00:39:23.458 20:38:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:39:23.458 20:38:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:23.458 20:38:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:23.458 20:38:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.458 ************************************ 00:39:23.458 START TEST nvmf_host_discovery 00:39:23.458 ************************************ 00:39:23.458 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:39:23.718 * Looking for test storage... 
00:39:23.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:39:23.718 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:23.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:23.719 --rc genhtml_branch_coverage=1 00:39:23.719 --rc genhtml_function_coverage=1 00:39:23.719 --rc genhtml_legend=1 00:39:23.719 --rc geninfo_all_blocks=1 00:39:23.719 --rc geninfo_unexecuted_blocks=1 00:39:23.719 00:39:23.719 ' 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:23.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:23.719 --rc genhtml_branch_coverage=1 00:39:23.719 --rc genhtml_function_coverage=1 00:39:23.719 --rc genhtml_legend=1 00:39:23.719 --rc geninfo_all_blocks=1 00:39:23.719 --rc geninfo_unexecuted_blocks=1 00:39:23.719 00:39:23.719 ' 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:23.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:23.719 --rc genhtml_branch_coverage=1 00:39:23.719 --rc genhtml_function_coverage=1 00:39:23.719 --rc genhtml_legend=1 00:39:23.719 --rc geninfo_all_blocks=1 00:39:23.719 --rc geninfo_unexecuted_blocks=1 00:39:23.719 00:39:23.719 ' 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:23.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:23.719 --rc genhtml_branch_coverage=1 00:39:23.719 --rc genhtml_function_coverage=1 00:39:23.719 --rc genhtml_legend=1 00:39:23.719 --rc geninfo_all_blocks=1 00:39:23.719 --rc geninfo_unexecuted_blocks=1 00:39:23.719 00:39:23.719 ' 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:23.719 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:39:23.719 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@458 -- # nvmf_veth_init 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
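nvmf_veth_init, which the trace executes next, builds the entire test network in software: a namespace to hold the target, veth pairs whose *_br ends stay on the host, /24 addresses out of 10.0.0.0, and a bridge joining the host-side ends (the 'Cannot find device' and 'Cannot open network namespace' messages below are only the idempotent pre-cleanup finding nothing to remove). Reduced to a single interface pair, the topology is:

    ip netns add nvmf_tgt_ns_spdk

    # One veth pair per interface; the *_br end stays on the host for bridging.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # A bridge stitches the host-side ends into one L2 segment.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

The ACCEPT rules added afterwards each carry an '-m comment --comment SPDK_NVMF:...' tag; that tag is what allowed the failover teardown above to drop them all with a single iptables-save | grep -v SPDK_NVMF | iptables-restore.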
00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:39:23.720 Cannot find device "nvmf_init_br" 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:39:23.720 Cannot find device "nvmf_init_br2" 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:39:23.720 Cannot find device "nvmf_tgt_br" 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:39:23.720 Cannot find device "nvmf_tgt_br2" 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:39:23.720 Cannot find device "nvmf_init_br" 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:39:23.720 Cannot find device "nvmf_init_br2" 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:39:23.720 Cannot find device "nvmf_tgt_br" 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:39:23.720 Cannot find device "nvmf_tgt_br2" 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:39:23.720 Cannot find device "nvmf_br" 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:39:23.720 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:39:23.984 Cannot find device "nvmf_init_if" 00:39:23.984 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:39:23.984 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:39:23.984 Cannot find device "nvmf_init_if2" 00:39:23.984 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:39:23.984 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:23.984 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:39:23.984 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:39:23.984 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:23.984 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:23.984 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:39:23.984 20:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:39:23.984 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:23.984 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:39:23.984 00:39:23.984 --- 10.0.0.3 ping statistics --- 00:39:23.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:23.984 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:39:23.984 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:39:23.984 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:39:23.984 00:39:23.984 --- 10.0.0.4 ping statistics --- 00:39:23.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:23.984 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:23.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:23.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:39:23.984 00:39:23.984 --- 10.0.0.1 ping statistics --- 00:39:23.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:23.984 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:39:23.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:23.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:39:23.984 00:39:23.984 --- 10.0.0.2 ping statistics --- 00:39:23.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:23.984 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # return 0 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:23.984 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:24.243 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:39:24.243 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:24.243 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:24.243 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:24.243 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:39:24.243 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=88658 00:39:24.243 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 88658 00:39:24.243 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 88658 ']' 00:39:24.243 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:24.243 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:24.243 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:24.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:24.243 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:24.243 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:24.243 [2024-10-01 20:38:19.296187] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
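With connectivity proven by the four pings, the harness prepends the netns wrapper to NVMF_APP, so the target only ever sees the namespace side of the bridge; the EAL parameter dump that follows shows the -m 0x2 core mask and spdk0 file prefix taking effect. The launch, in short:

    # Every target invocation is prefixed so it runs inside the namespace (@227/@506).
    NVMF_APP=(ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)

    "${NVMF_APP[@]}" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # waitforlisten then polls until the default RPC socket, /var/tmp/spdk.sock, answers.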
00:39:24.243 [2024-10-01 20:38:19.296265] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:39:24.243 [2024-10-01 20:38:19.424741] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:39:24.243 [2024-10-01 20:38:19.474268] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:39:24.243 [2024-10-01 20:38:19.474331] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:39:24.243 [2024-10-01 20:38:19.474342] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:39:24.243 [2024-10-01 20:38:19.474350] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:39:24.243 [2024-10-01 20:38:19.474356] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:39:24.243 [2024-10-01 20:38:19.474382] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:24.503 [2024-10-01 20:38:19.644693] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:24.503 [2024-10-01 20:38:19.652821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 ***
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:24.503 null0
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:24.503 null1
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88693
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88693 /tmp/host.sock
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 88693 ']'
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock
00:39:24.503 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable
00:39:24.503 20:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:24.503 [2024-10-01 20:38:19.743312] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization...
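Stripped of the xtrace noise, the target-side bring-up above is a short RPC sequence. Condensed into direct scripts/rpc.py calls (rpc_cmd in the trace is the harness wrapper around that script; flags as captured in the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # TCP transport for the nvmf target:
  $rpc nvmf_create_transport -t tcp -o -u 8192
  # Discovery subsystem listener on the canonical discovery NQN, port 8009:
  $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
  # Two null bdevs (1000 MiB, 512-byte blocks) to back the namespaces used later:
  $rpc bdev_null_create null0 1000 512
  $rpc bdev_null_create null1 1000 512
  $rpc bdev_wait_for_examine

A second nvmf_tgt (-m 0x1 -r /tmp/host.sock) then plays the host role; its RPCs are the rpc_cmd -s /tmp/host.sock calls that follow.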
00:39:24.503 [2024-10-01 20:38:19.743409] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88693 ]
00:39:24.762 [2024-10-01 20:38:19.885537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:39:24.762 [2024-10-01 20:38:19.955476] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:39:25.021 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:25.022 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:25.022 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:25.022 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:39:25.022 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:39:25.022 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:25.022 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:39:25.022 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:25.022 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:39:25.022 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:39:25.022 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:25.281 [2024-10-01 20:38:20.465022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:39:25.281 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]]
00:39:25.541 20:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1
00:39:26.108 [2024-10-01 20:38:21.117557] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached
00:39:26.108 [2024-10-01 20:38:21.117602] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected
00:39:26.108 [2024-10-01 20:38:21.117621] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:39:26.108 [2024-10-01 20:38:21.203688] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0
00:39:26.108 [2024-10-01 20:38:21.260001] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done
00:39:26.108 [2024-10-01 20:38:21.260046] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
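The get_subsystem_names and get_bdev_list checks that repeat throughout are one-liners against the host-side RPC socket, reconstructed here from the @55/@59 trace lines:

  # Helpers as exercised by host/discovery.sh (reconstructed from the trace):
  get_subsystem_names() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  # Before discovery both return ''; after the attach above, get_subsystem_names
  # yields "nvme0" and get_bdev_list "nvme0n1" (then "nvme0n1 nvme0n2" once null1 is added).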
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]]
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:39:26.700 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:39:27.009 20:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:39:27.009 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:27.009 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:27.009 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:27.009 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:39:27.009 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:39:27.009 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:39:27.009 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:39:27.009 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421
00:39:27.009 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:27.009 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:27.010 [2024-10-01 20:38:22.053905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:39:27.010 [2024-10-01 20:38:22.054966] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:39:27.010 [2024-10-01 20:38:22.055002] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:39:27.010 [2024-10-01 20:38:22.141020] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:27.010 [2024-10-01 20:38:22.201446] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done
00:39:27.010 [2024-10-01 20:38:22.201473] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:39:27.010 [2024-10-01 20:38:22.201480] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:39:27.010 20:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1
00:39:28.388 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:39:28.388 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:39:28.388 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:39:28.388 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:39:28.388 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:28.388 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:39:28.388 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:28.388 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:39:28.388 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:39:28.388 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:28.388 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:39:28.388 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:39:28.388 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:39:28.388 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:39:28.388 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:39:28.388 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:39:28.388 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:39:28.388 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:39:28.388 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:39:28.388 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:39:28.388 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:39:28.388 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:28.389 [2024-10-01 20:38:23.343063] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:39:28.389 [2024-10-01 20:38:23.343114] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:39:28.389 [2024-10-01 20:38:23.348383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:39:28.389 [2024-10-01 20:38:23.348416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:28.389 [2024-10-01 20:38:23.348430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:39:28.389 [2024-10-01 20:38:23.348440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:28.389 [2024-10-01 20:38:23.348451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:39:28.389 [2024-10-01 20:38:23.348461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:28.389 [2024-10-01 20:38:23.348473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:39:28.389 [2024-10-01 20:38:23.348482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:39:28.389 [2024-10-01 20:38:23.348491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2487220 is same with the state(6) to be set
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:39:28.389 [2024-10-01 20:38:23.358341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2487220 (9): Bad file descriptor
00:39:28.389 [2024-10-01 20:38:23.368360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:39:28.389 [2024-10-01 20:38:23.368496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.389 [2024-10-01 20:38:23.368518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2487220 with addr=10.0.0.3, port=4420
00:39:28.389 [2024-10-01 20:38:23.368529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2487220 is same with the state(6) to be set
00:39:28.389 [2024-10-01 20:38:23.368545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2487220 (9): Bad file descriptor
00:39:28.389 [2024-10-01 20:38:23.368558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:39:28.389 [2024-10-01 20:38:23.368567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:39:28.389 [2024-10-01 20:38:23.368576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
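Every state check above funnels through the same bounded polling helper; from the common/autotest_common.sh@914-@920 trace lines, waitforcondition reduces to roughly:

  # Reconstructed from the @914-@920 lines in the trace:
  waitforcondition() {
      local cond=$1
      local max=10
      while ((max--)); do
          eval "$cond" && return 0   # condition met -> success
          sleep 1                    # not yet -> retry, up to ~10 seconds
      done
      return 1                       # timeout; the caller treats this as failure
  }
  # Usage seen above: waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'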
00:39:28.389 [2024-10-01 20:38:23.368591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:28.389 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:28.389 [2024-10-01 20:38:23.378415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:39:28.389 [2024-10-01 20:38:23.378521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.389 [2024-10-01 20:38:23.378541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2487220 with addr=10.0.0.3, port=4420
00:39:28.389 [2024-10-01 20:38:23.378551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2487220 is same with the state(6) to be set
00:39:28.389 [2024-10-01 20:38:23.378566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2487220 (9): Bad file descriptor
00:39:28.389 [2024-10-01 20:38:23.378579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:39:28.389 [2024-10-01 20:38:23.378587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:39:28.389 [2024-10-01 20:38:23.378595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:39:28.389 [2024-10-01 20:38:23.378624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:28.389 [2024-10-01 20:38:23.388477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:39:28.389 [2024-10-01 20:38:23.388572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.389 [2024-10-01 20:38:23.388593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2487220 with addr=10.0.0.3, port=4420
00:39:28.389 [2024-10-01 20:38:23.388603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2487220 is same with the state(6) to be set
00:39:28.389 [2024-10-01 20:38:23.388618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2487220 (9): Bad file descriptor
00:39:28.389 [2024-10-01 20:38:23.388631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:39:28.389 [2024-10-01 20:38:23.388638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:39:28.389 [2024-10-01 20:38:23.388647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:39:28.389 [2024-10-01 20:38:23.388661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
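While the host retries the now-dead 4420 path, the test polls the controller's path list until only the surviving listener remains; the helper behind those @63 lines, reconstructed from the trace:

  # Path checks during listener add/remove (reconstructed from the @63 trace lines):
  get_subsystem_paths() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }
  # Returns "4420 4421" while both listeners are up; once 4420 is removed the
  # reconnect storm above settles and only "4421" remains.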
00:39:28.390 [2024-10-01 20:38:23.398542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:39:28.390 [2024-10-01 20:38:23.398632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.390 [2024-10-01 20:38:23.398651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2487220 with addr=10.0.0.3, port=4420
00:39:28.390 [2024-10-01 20:38:23.398661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2487220 is same with the state(6) to be set
00:39:28.390 [2024-10-01 20:38:23.398676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2487220 (9): Bad file descriptor
00:39:28.390 [2024-10-01 20:38:23.398688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:39:28.390 [2024-10-01 20:38:23.398697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:39:28.390 [2024-10-01 20:38:23.398705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:39:28.390 [2024-10-01 20:38:23.398719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:39:28.390 [2024-10-01 20:38:23.408605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:39:28.390 [2024-10-01 20:38:23.408675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.390 [2024-10-01 20:38:23.408695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2487220 with addr=10.0.0.3, port=4420
00:39:28.390 [2024-10-01 20:38:23.408705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2487220 is same with the state(6) to be set
00:39:28.390 [2024-10-01 20:38:23.408721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2487220 (9): Bad file descriptor
00:39:28.390 [2024-10-01 20:38:23.408734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:39:28.390 [2024-10-01 20:38:23.408742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:39:28.390 [2024-10-01 20:38:23.408751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:39:28.390 [2024-10-01 20:38:23.408766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:39:28.390 [2024-10-01 20:38:23.418658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:39:28.390 [2024-10-01 20:38:23.418734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.390 [2024-10-01 20:38:23.418755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2487220 with addr=10.0.0.3, port=4420
00:39:28.390 [2024-10-01 20:38:23.418765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2487220 is same with the state(6) to be set
00:39:28.390 [2024-10-01 20:38:23.418780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2487220 (9): Bad file descriptor
00:39:28.390 [2024-10-01 20:38:23.418793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:39:28.390 [2024-10-01 20:38:23.418801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:39:28.390 [2024-10-01 20:38:23.418810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:39:28.390 [2024-10-01 20:38:23.418824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:28.390 [2024-10-01 20:38:23.428706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:39:28.390 [2024-10-01 20:38:23.428788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:28.390 [2024-10-01 20:38:23.428808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2487220 with addr=10.0.0.3, port=4420
00:39:28.390 [2024-10-01 20:38:23.428818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2487220 is same with the state(6) to be set
00:39:28.390 [2024-10-01 20:38:23.428833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2487220 (9): Bad file descriptor
00:39:28.390 [2024-10-01 20:38:23.428846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:39:28.390 [2024-10-01 20:38:23.428854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:39:28.390 [2024-10-01 20:38:23.428862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:39:28.390 [2024-10-01 20:38:23.428876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
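The is_notification_count_eq checks track bdev notifications via an incrementing cursor; reconstructed from the @74/@75 trace lines (the exact notify_id bookkeeping is an assumption inferred from the values the trace prints):

  # Notification accounting (reconstructed; cursor update is an assumption):
  get_notification_count() {
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
          | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }
  # Matches the values above: two bdev events advance notify_id 0 -> 1 -> 2, and the
  # listener removal produces no new notification (notification_count=0, notify_id=2).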
00:39:28.390 [2024-10-01 20:38:23.430681] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:39:28.390 [2024-10-01 20:38:23.430706] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:39:28.390 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:39:28.391 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.649 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:39:28.649 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.650 20:38:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:29.586 [2024-10-01 20:38:24.767880] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:39:29.586 [2024-10-01 20:38:24.767907] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:39:29.586 [2024-10-01 20:38:24.767924] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:39:29.845 [2024-10-01 20:38:24.853975] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:39:29.845 [2024-10-01 20:38:24.914330] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:39:29.845 [2024-10-01 20:38:24.914394] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:39:29.845 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.845 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:39:29.845 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:39:29.845 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:39:29.845 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:39:29.845 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:29.845 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:39:29.845 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:29.845 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:39:29.845 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.845 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:39:29.845 2024/10/01 20:38:24 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:39:29.845 request: 00:39:29.845 { 00:39:29.845 "method": "bdev_nvme_start_discovery", 00:39:29.845 "params": { 00:39:29.845 "name": "nvme", 00:39:29.845 "trtype": "tcp", 00:39:29.845 "traddr": "10.0.0.3", 00:39:29.845 "adrfam": "ipv4", 00:39:29.845 "trsvcid": "8009", 00:39:29.845 "hostnqn": "nqn.2021-12.io.spdk:test", 00:39:29.845 "wait_for_attach": true 00:39:29.845 } 00:39:29.845 } 00:39:29.845 Got JSON-RPC error response 00:39:29.845 GoRPCClient: error on JSON-RPC call 00:39:29.845 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:29.845 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:39:29.845 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:29.845 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:29.845 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:29.846 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:39:29.846 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:39:29.846 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:39:29.846 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.846 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:29.846 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:39:29.846 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:39:29.846 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.846 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:39:29.846 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:39:29.846 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:29.846 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.846 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:29.846 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:29.846 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:29.846 20:38:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 
-s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:29.846 2024/10/01 20:38:25 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:39:29.846 request: 00:39:29.846 { 00:39:29.846 "method": "bdev_nvme_start_discovery", 00:39:29.846 "params": { 00:39:29.846 "name": "nvme_second", 00:39:29.846 "trtype": "tcp", 00:39:29.846 "traddr": "10.0.0.3", 00:39:29.846 "adrfam": "ipv4", 00:39:29.846 "trsvcid": "8009", 00:39:29.846 "hostnqn": "nqn.2021-12.io.spdk:test", 00:39:29.846 "wait_for_attach": true 00:39:29.846 } 00:39:29.846 } 00:39:29.846 Got JSON-RPC error response 00:39:29.846 GoRPCClient: error on JSON-RPC call 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:39:29.846 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:30.105 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:39:30.105 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:39:30.105 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:30.105 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.105 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:30.105 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:39:30.105 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:39:30.105 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:39:30.105 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:30.105 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:39:30.105 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:39:30.105 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:39:30.105 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:39:30.105 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:39:30.105 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:30.105 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:39:30.105 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:30.105 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:39:30.105 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.105 20:38:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:31.040 [2024-10-01 20:38:26.175182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:31.040 [2024-10-01 20:38:26.175238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2511800 with addr=10.0.0.3, port=8010 00:39:31.040 [2024-10-01 20:38:26.175256] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:39:31.040 [2024-10-01 20:38:26.175265] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:31.040 [2024-10-01 20:38:26.175274] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:39:31.973 [2024-10-01 20:38:27.175158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:31.973 [2024-10-01 20:38:27.175210] 
nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2510fb0 with addr=10.0.0.3, port=8010 00:39:31.973 [2024-10-01 20:38:27.175226] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:39:31.973 [2024-10-01 20:38:27.175235] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:31.973 [2024-10-01 20:38:27.175243] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:39:33.345 [2024-10-01 20:38:28.175055] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:39:33.345 2024/10/01 20:38:28 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:39:33.345 request: 00:39:33.345 { 00:39:33.345 "method": "bdev_nvme_start_discovery", 00:39:33.345 "params": { 00:39:33.345 "name": "nvme_second", 00:39:33.345 "trtype": "tcp", 00:39:33.345 "traddr": "10.0.0.3", 00:39:33.345 "adrfam": "ipv4", 00:39:33.345 "trsvcid": "8010", 00:39:33.345 "hostnqn": "nqn.2021-12.io.spdk:test", 00:39:33.345 "wait_for_attach": false, 00:39:33.345 "attach_timeout_ms": 3000 00:39:33.345 } 00:39:33.345 } 00:39:33.345 Got JSON-RPC error response 00:39:33.345 GoRPCClient: error on JSON-RPC call 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88693 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:33.345 20:38:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:33.345 rmmod nvme_tcp 00:39:33.345 rmmod nvme_fabrics 00:39:33.345 rmmod nvme_keyring 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 88658 ']' 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 88658 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 88658 ']' 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 88658 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88658 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:33.345 killing process with pid 88658 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88658' 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 88658 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 88658 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:33.345 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:39:33.346 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:39:33.603 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 
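The block above is the nvmftestfini teardown: sync, unload the nvme-tcp/fabrics/keyring modules, then killprocess the nvmf_tgt pid before the veth plumbing is dismantled. A minimal sketch of the killprocess pattern visible in the xtrace (the real helper also special-cases processes running under sudo, which is elided here):

    # killprocess: verify the pid is alive, log what is being killed,
    # signal it, then reap it.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                  # still running?
        ps --no-headers -o comm= "$pid"             # e.g. reactor_1 above
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # works here because nvmf_tgt is a child of this shell
    }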
00:39:33.603 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:39:33.603 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:39:33.603 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:39:33.603 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:39:33.603 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:39:33.603 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:39:33.603 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:39:33.603 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:39:33.603 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:39:33.603 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:33.603 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:33.603 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:39:33.603 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:33.603 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:33.603 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:33.603 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:39:33.603 00:39:33.603 real 0m10.144s 00:39:33.603 user 0m19.841s 00:39:33.603 sys 0m1.623s 00:39:33.603 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:33.603 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:33.603 ************************************ 00:39:33.603 END TEST nvmf_host_discovery 00:39:33.603 ************************************ 00:39:33.867 20:38:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:39:33.867 20:38:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:33.867 20:38:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:33.867 20:38:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:33.867 ************************************ 00:39:33.867 START TEST nvmf_host_multipath_status 00:39:33.867 ************************************ 00:39:33.867 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:39:33.867 * Looking for test storage... 
00:39:33.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:39:33.867 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:33.867 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:39:33.867 20:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:33.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:33.867 --rc genhtml_branch_coverage=1 00:39:33.867 --rc genhtml_function_coverage=1 00:39:33.867 --rc genhtml_legend=1 00:39:33.867 --rc geninfo_all_blocks=1 00:39:33.867 --rc geninfo_unexecuted_blocks=1 00:39:33.867 00:39:33.867 ' 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:33.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:33.867 --rc genhtml_branch_coverage=1 00:39:33.867 --rc genhtml_function_coverage=1 00:39:33.867 --rc genhtml_legend=1 00:39:33.867 --rc geninfo_all_blocks=1 00:39:33.867 --rc geninfo_unexecuted_blocks=1 00:39:33.867 00:39:33.867 ' 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:33.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:33.867 --rc genhtml_branch_coverage=1 00:39:33.867 --rc genhtml_function_coverage=1 00:39:33.867 --rc genhtml_legend=1 00:39:33.867 --rc geninfo_all_blocks=1 00:39:33.867 --rc geninfo_unexecuted_blocks=1 00:39:33.867 00:39:33.867 ' 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:33.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:33.867 --rc genhtml_branch_coverage=1 00:39:33.867 --rc genhtml_function_coverage=1 00:39:33.867 --rc genhtml_legend=1 00:39:33.867 --rc geninfo_all_blocks=1 00:39:33.867 --rc geninfo_unexecuted_blocks=1 00:39:33.867 00:39:33.867 ' 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:33.867 20:38:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:33.867 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:33.868 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:33.868 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:33.868 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:33.868 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:33.868 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:33.868 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:33.868 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:39:33.868 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:39:33.868 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:33.868 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:33.868 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:33.868 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:33.868 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:33.868 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:33.872 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # nvmf_veth_init 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:33.872 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:39:33.873 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:33.873 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:39:33.873 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:33.873 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:33.873 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:33.873 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:33.873 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:33.873 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:33.873 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:39:33.873 Cannot find device "nvmf_init_br" 00:39:33.873 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:39:33.873 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:39:33.873 Cannot find device "nvmf_init_br2" 00:39:34.132 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:39:34.132 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:39:34.132 Cannot find device "nvmf_tgt_br" 00:39:34.132 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:39:34.132 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:39:34.132 Cannot find device "nvmf_tgt_br2" 00:39:34.132 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:39:34.132 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:39:34.132 Cannot find device "nvmf_init_br" 00:39:34.132 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:39:34.132 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:39:34.132 Cannot find device "nvmf_init_br2" 00:39:34.132 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:39:34.132 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:39:34.132 Cannot find device "nvmf_tgt_br" 00:39:34.132 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:39:34.132 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:39:34.132 Cannot find device "nvmf_tgt_br2" 00:39:34.132 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:39:34.132 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:39:34.132 Cannot find device "nvmf_br" 00:39:34.132 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:39:34.132 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:39:34.132 Cannot find device "nvmf_init_if" 00:39:34.132 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:39:34.132 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:39:34.132 Cannot find device "nvmf_init_if2" 00:39:34.132 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:39:34.132 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:34.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:34.132 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:39:34.132 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:34.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:34.133 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:39:34.133 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:39:34.133 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:34.133 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:39:34.133 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:34.133 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:34.133 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:34.133 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:34.133 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:34.133 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:39:34.133 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:39:34.133 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:39:34.133 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:39:34.133 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:39:34.392 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:39:34.392 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:39:34.392 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:39:34.392 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:39:34.392 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
00:39:34.392 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:39:34.392 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:39:34.392 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:39:34.392 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:39:34.392 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:39:34.392 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:39:34.392 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:39:34.392 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:39:34.392 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:39:34.393 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:39:34.393 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms
00:39:34.393
00:39:34.393 --- 10.0.0.3 ping statistics ---
00:39:34.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:39:34.393 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:39:34.393 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:39:34.393 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms
00:39:34.393
00:39:34.393 --- 10.0.0.4 ping statistics ---
00:39:34.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:39:34.393 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
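Enslaving all four *_br peers to nvmf_br puts the root namespace and the target namespace on one L2 segment, which the four pings in this stretch confirm in both directions. Judging by the expanded iptables commands at nvmf/common.sh@788, the ipts helper tags each rule with its own argument string so teardown can find and remove exactly the rules the test added; a reconstruction consistent with the trace:

ipts() {
    # append an SPDK_NVMF comment recording the rule spec itself
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow intra-bridge forwarding
# cleanup can later grep iptables-save for SPDK_NVMF and delete each tagged rule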
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:39:34.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:39:34.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms
00:39:34.393
00:39:34.393 --- 10.0.0.1 ping statistics ---
00:39:34.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:39:34.393 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:39:34.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:39:34.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms
00:39:34.393
00:39:34.393 --- 10.0.0.2 ping statistics ---
00:39:34.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:39:34.393 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # return 0
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:39:34.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=89214
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 89214
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 89214 ']'
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
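nvmfappstart runs the target inside the namespace and waitforlisten blocks until its RPC socket answers. A simplified equivalent of how the two helpers behave here (the real waitforlisten also honors the max_retries=100 bound traced above; rpc_get_methods is used below only as a cheap readiness probe):

# run the target on cores 0-1 (-m 0x3) inside the namespace, all trace groups on (-e 0xFFFF)
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# poll the UNIX-domain RPC socket until the app is up
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done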
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable
00:39:34.393 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:39:34.393 [2024-10-01 20:38:29.610756] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization...
00:39:34.393 [2024-10-01 20:38:29.611010] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:39:34.652 [2024-10-01 20:38:29.750791] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:39:34.652 [2024-10-01 20:38:29.821165] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:39:34.652 [2024-10-01 20:38:29.821465] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:39:34.652 [2024-10-01 20:38:29.821764] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:39:34.652 [2024-10-01 20:38:29.821954] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:39:34.652 [2024-10-01 20:38:29.822129] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:39:34.652 [2024-10-01 20:38:29.822290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:39:34.652 [2024-10-01 20:38:29.822303] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:39:34.911 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:39:34.911 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0
00:39:34.911 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:39:34.911 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable
00:39:34.911 20:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:39:34.911 20:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:39:34.911 20:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=89214
00:39:34.911 20:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:39:35.169 [2024-10-01 20:38:30.307649] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:39:35.169 20:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:39:35.428 Malloc0
00:39:35.428 20:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:39:35.996 20:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:39:35.996 20:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
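Target-side provisioning is a short RPC sequence; annotated below with flag readings per scripts/rpc.py (note -r on nvmf_create_subsystem: it enables ANA reporting, the feature this whole test exercises; -o is passed through from the NVMF_TRANSPORT_OPTS seen earlier):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                # TCP transport; -u sets an 8 KiB I/O unit
$rpc bdev_malloc_create 64 512 -b Malloc0                   # 64 MiB RAM bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -r -m 2                        # any host, serial no., ANA reporting, <=2 namespaces
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421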
00:39:36.256 [2024-10-01 20:38:31.373927] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:39:36.256 20:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:39:36.515 [2024-10-01 20:38:31.718183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:39:36.515 20:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=89304
00:39:36.515 20:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:39:36.515 20:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 89304 /var/tmp/bdevperf.sock
00:39:36.515 20:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 89304 ']'
00:39:36.515 20:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:39:36.515 20:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100
00:39:36.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:39:36.515 20:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:39:36.515 20:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable
00:39:36.515 20:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:39:36.515 20:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:39:37.084 20:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:39:37.084 20:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0
00:39:37.084 20:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:39:37.343 20:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
00:39:37.603 Nvme0n1
00:39:37.603 20:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:39:38.171 Nvme0n1
00:39:38.171 20:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:39:38.171 20:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
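bdevperf plays the host role: a separate SPDK app on core 2 (-m 0x4) with its own RPC socket, where -z holds the configured verify workload (-q 128, 4 KiB I/Os, 90 s) until bdevperf.py sends perform_tests, as traced above. The two attach calls are what makes this multipath: reusing -b Nvme0 with -x multipath makes the second call add 10.0.0.3:4421 as another path to the existing Nvme0n1 bdev rather than creating a new one. A sketch, reading -l -1 as an infinite controller-loss timeout, -o 10 as a 10 s reconnect delay, and -r -1 as an unlimited bdev retry count:

bp="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
$bp bdev_nvme_set_options -r -1                                  # never stop retrying failed I/O
$bp bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10            # first path -> creates Nvme0n1
$bp bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10   # second path to the same namespace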
00:39:40.076 20:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:39:40.077 20:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized
00:39:40.335 20:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
00:39:40.594 20:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:39:41.971 20:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:39:41.971 20:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:39:41.971 20:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:39:41.971 20:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:39:41.971 20:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:39:41.971 20:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:39:41.971 20:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:39:41.971 20:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:39:42.230 20:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:39:42.230 20:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:39:42.230 20:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:39:42.230 20:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:39:42.489 20:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:39:42.489 20:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:39:42.489 20:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:39:42.489 20:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:39:42.747 20:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
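Every check_status round from here on repeats the same probe six times; from the rpc.py/jq pairs traced at host/multipath_status.sh@64, the helpers are essentially the following ("current" is the path I/O is routed to, "connected" the transport connection, "accessible" the host's view of the listener's ANA state):

# port_status <trsvcid> <attribute> <expected>: ask bdevperf how it sees one path
port_status() {
    local port=$1 attr=$2 expected=$3 actual
    actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ $actual == "$expected" ]]
}
# check_status <cur4420> <cur4421> <conn4420> <conn4421> <acc4420> <acc4421>
check_status() {
    port_status 4420 current "$1" && port_status 4421 current "$2" &&
    port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
    port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
}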
00:39:42.747 20:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:39:42.747 20:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:39:42.747 20:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:39:43.006 20:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:39:43.006 20:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:39:43.006 20:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:39:43.006 20:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:39:43.264 20:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:39:43.264 20:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:39:43.264 20:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:39:43.522 20:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
00:39:43.781 20:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:39:44.716 20:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:39:44.716 20:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:39:44.716 20:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:39:44.716 20:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:39:44.974 20:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:39:44.974 20:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:39:44.974 20:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:39:44.974 20:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:39:45.233 20:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
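set_ANA_state, per its traced @59/@60 calls, pushes one ANA state to each listener on the target. The rest of the log cycles through the combinations below, first under the default active_passive policy and again after host/multipath_status.sh@116 switches Nvme0n1 to active_active with bdev_nvme_set_multipath_policy; the matrix is read directly off the check_status calls in this log:

set_ANA_state() {    # reconstructed from the @59/@60 calls above
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$2"
}
# Expectations asserted below (current 4420/4421, connected 4420/4421, accessible 4420/4421):
#   active_passive  optimized/optimized          -> true/false   true/true   true/true
#   active_passive  non_optimized/optimized      -> false/true   true/true   true/true
#   active_passive  non_optimized/non_optimized  -> true/false   true/true   true/true
#   active_passive  non_optimized/inaccessible   -> true/false   true/true   true/false
#   active_passive  inaccessible/inaccessible    -> false/false  true/true   false/false
#   active_passive  inaccessible/optimized       -> false/true   true/true   false/true
#   active_active   optimized/optimized          -> true/true    true/true   true/true
#   active_active   non_optimized/optimized      -> false/true   true/true   true/true
#   active_active   non_optimized/non_optimized  -> true/true    true/true   true/true
#   active_active   non_optimized/inaccessible   -> true/false   true/true   true/false

Note that connected stays true/true throughout: ANA transitions change which paths are accessible and current, but they never tear down the controllers themselves.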
00:39:45.233 20:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:39:45.233 20:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:45.233 20:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:39:45.492 20:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:45.492 20:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:39:45.492 20:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:45.492 20:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:39:45.750 20:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:45.750 20:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:39:45.750 20:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:39:45.750 20:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:46.009 20:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:46.009 20:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:39:46.009 20:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:46.009 20:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:39:46.268 20:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:46.268 20:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:39:46.268 20:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:39:46.526 20:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:39:46.784 20:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:39:48.161 20:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:39:48.161 20:38:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:39:48.161 20:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:48.161 20:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:39:48.161 20:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:48.161 20:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:39:48.161 20:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:48.161 20:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:39:48.419 20:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:39:48.419 20:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:39:48.419 20:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:48.419 20:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:39:48.678 20:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:48.678 20:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:39:48.678 20:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:48.678 20:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:39:48.937 20:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:48.937 20:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:39:48.937 20:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:48.937 20:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:39:49.196 20:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:49.196 20:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:39:49.196 20:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:49.196 
20:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:39:49.472 20:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:49.472 20:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:39:49.472 20:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:39:49.766 20:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:39:50.050 20:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:39:50.986 20:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:39:50.986 20:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:39:50.986 20:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:50.986 20:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:39:51.245 20:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:51.245 20:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:39:51.245 20:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:51.245 20:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:39:51.505 20:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:39:51.505 20:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:39:51.505 20:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:51.505 20:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:39:51.764 20:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:51.764 20:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:39:51.764 20:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:51.764 20:38:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:39:52.331 20:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:52.331 20:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:39:52.331 20:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:52.331 20:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:39:52.331 20:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:52.331 20:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:39:52.331 20:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:52.331 20:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:39:52.590 20:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:39:52.590 20:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:39:52.590 20:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:39:52.849 20:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:39:53.108 20:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:39:54.480 20:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:39:54.480 20:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:39:54.480 20:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:54.480 20:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:39:54.480 20:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:39:54.480 20:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:39:54.480 20:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:54.480 20:38:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:39:54.738 20:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:39:54.738 20:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:39:54.738 20:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:54.738 20:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:39:54.996 20:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:54.996 20:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:39:54.996 20:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:54.996 20:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:39:55.254 20:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:55.254 20:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:39:55.254 20:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:55.254 20:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:39:55.512 20:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:39:55.512 20:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:39:55.512 20:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:55.512 20:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:39:55.771 20:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:39:55.771 20:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:39:55.771 20:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:39:56.029 20:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:39:56.287 
20:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:39:57.222 20:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:39:57.222 20:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:39:57.222 20:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:57.222 20:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:39:57.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:39:57.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:39:57.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:57.481 20:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:39:57.740 20:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:57.740 20:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:39:57.740 20:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:57.740 20:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:39:57.999 20:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:57.999 20:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:39:57.999 20:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:57.999 20:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:39:58.259 20:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:58.259 20:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:39:58.259 20:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:58.259 20:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:39:58.518 20:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:39:58.518 20:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:39:58.518 20:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:39:58.518 20:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:39:59.087 20:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:39:59.087 20:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:39:59.087 20:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:39:59.087 20:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:39:59.347 20:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:39:59.607 20:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:40:00.545 20:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:40:00.545 20:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:00.545 20:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:00.545 20:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:00.805 20:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:00.805 20:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:00.805 20:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:00.805 20:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:01.065 20:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:01.065 20:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:01.065 20:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:01.065 20:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:01.325 20:38:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:01.325 20:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:01.325 20:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:01.325 20:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:01.584 20:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:01.584 20:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:01.584 20:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:01.584 20:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:01.843 20:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:01.843 20:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:01.843 20:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:01.843 20:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:02.412 20:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:02.412 20:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:40:02.412 20:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:40:02.412 20:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:40:02.671 20:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:40:03.610 20:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:40:03.610 20:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:03.610 20:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:03.610 20:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:04.179 20:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:04.179 20:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:04.179 20:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:04.179 20:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:04.438 20:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:04.438 20:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:04.438 20:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:04.438 20:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:04.698 20:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:04.698 20:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:04.698 20:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:04.698 20:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:04.958 20:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:04.958 20:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:04.958 20:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:04.958 20:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:05.217 20:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:05.217 20:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:05.217 20:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:05.217 20:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:05.476 20:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:05.476 20:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:40:05.476 20:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:40:05.735 20:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:40:05.994 20:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:40:06.932 20:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:40:06.932 20:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:06.932 20:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:06.932 20:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:07.192 20:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:07.192 20:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:07.192 20:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:07.192 20:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:07.451 20:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:07.451 20:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:07.451 20:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:07.451 20:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:07.709 20:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:07.709 20:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:07.709 20:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:07.709 20:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:07.967 20:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:07.967 20:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:07.967 20:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:07.967 20:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:08.536 20:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:08.536 20:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:08.536 20:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:08.536 20:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:08.795 20:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:08.795 20:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:40:08.795 20:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:40:09.054 20:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:40:09.314 20:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:40:10.251 20:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:40:10.251 20:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:10.251 20:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:10.252 20:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:10.510 20:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:10.510 20:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:10.510 20:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:10.510 20:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:10.768 20:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:10.768 20:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:10.768 20:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:40:10.768 20:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:40:10.768 20:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:40:10.768 20:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:40:11.025 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:40:11.025 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:40:11.025 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:40:11.025 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:40:11.284 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:40:11.284 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:40:11.284 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:40:11.284 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:40:11.542 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:40:11.542 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:40:11.542 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:40:11.542 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:40:11.801 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:40:11.801 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 89304
00:40:11.801 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 89304 ']'
00:40:11.801 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 89304
00:40:11.801 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:40:11.801 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:40:11.801 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89304
00:40:11.801 killing process with pid 89304
00:40:11.801 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:40:11.801 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:40:11.801 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89304'
00:40:11.801 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 89304
00:40:11.801 20:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 89304
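killprocess, traced just above from common/autotest_common.sh, guards the kill with three checks before signalling bdevperf: a non-empty pid, kill -0 liveness, and (on Linux) a process-name lookup so that sudo-wrapped processes can be handled differently. A condensed sketch of that flow, inferred from the trace line numbers (@950-@974) rather than copied from the source:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1              # @950: refuse an empty pid
        kill -0 "$pid" || return 1             # @954: bail out if the process is already gone
        if [ "$(uname)" = Linux ]; then        # @955
            process_name=$(ps --no-headers -o comm= "$pid")   # @956: resolves to reactor_2 here
        fi
        echo "killing process with pid $pid"   # @968
        kill "$pid"                            # @969: '[' reactor_2 = sudo ']' was false (@960), so a plain kill
        wait "$pid"                            # @974: reap the child and collect its exit status
    }

The wait matters: bdevperf was launched by this shell, so waiting reaps it and lets its final JSON summary land in the log, as seen next.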
00:40:11.801 {
00:40:11.801   "results": [
00:40:11.801     {
00:40:11.801       "job": "Nvme0n1",
00:40:11.801       "core_mask": "0x4",
00:40:11.801       "workload": "verify",
00:40:11.801       "status": "terminated",
00:40:11.801       "verify_range": {
00:40:11.801         "start": 0,
00:40:11.801         "length": 16384
00:40:11.801       },
00:40:11.801       "queue_depth": 128,
00:40:11.801       "io_size": 4096,
00:40:11.801       "runtime": 33.661473,
00:40:11.801       "iops": 9506.803222782319,
00:40:11.801       "mibps": 37.13595008899343,
00:40:11.801       "io_failed": 0,
00:40:11.801       "io_timeout": 0,
00:40:11.801       "avg_latency_us": 13438.244446592085,
00:40:11.801       "min_latency_us": 140.56727272727272,
00:40:11.801       "max_latency_us": 4026531.84
00:40:11.801     }
00:40:11.801   ],
00:40:11.801   "core_count": 1
00:40:11.801 }
00:40:12.086 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 89304
00:40:12.086 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:40:12.086 [2024-10-01 20:38:31.800258] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization...
00:40:12.086 [2024-10-01 20:38:31.800356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89304 ]
00:40:12.086 [2024-10-01 20:38:31.938252] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:12.086 [2024-10-01 20:38:32.007297] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:40:12.086 [2024-10-01 20:38:33.148378] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01
00:40:12.086 Running I/O for 90 seconds...
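bdevperf's summary above reports 9506.8 IOPS (37.14 MiB/s) averaged over the whole 33.7 s run, while the per-interval samples that follow show throughput sagging toward ~8100 IOPS during the inaccessible windows and recovering afterwards. The summary's fields are internally consistent; a quick arithmetic check against the reported values:

    # mibps = iops * io_size / 2^20
    awk 'BEGIN { printf "%.2f MiB/s\n", 9506.803222782319 * 4096 / (1024 * 1024) }'   # -> 37.14, matching "mibps"
    # total I/O completed ~ iops * runtime ~ 9506.8 * 33.66 ~ 320,000 verify I/Os

The roughly 4 s max_latency_us plausibly belongs to an I/O that sat queued while its path was ANA-inaccessible during one of the failover windows exercised above.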
00:40:12.086 10274.00 IOPS, 40.13 MiB/s 10423.50 IOPS, 40.72 MiB/s 10423.33 IOPS, 40.72 MiB/s 10407.75 IOPS, 40.66 MiB/s 10330.00 IOPS, 40.35 MiB/s 10331.17 IOPS, 40.36 MiB/s 10317.43 IOPS, 40.30 MiB/s 10267.75 IOPS, 40.11 MiB/s 10250.78 IOPS, 40.04 MiB/s 10283.50 IOPS, 40.17 MiB/s 10259.18 IOPS, 40.07 MiB/s 10236.67 IOPS, 39.99 MiB/s 10265.85 IOPS, 40.10 MiB/s 10255.79 IOPS, 40.06 MiB/s
00:40:12.086 [2024-10-01 20:38:48.004297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:12.086 [2024-10-01 20:38:48.004364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005b p:0 m:0 dnr:0
[... ~80 further nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs for lba 25144-25776, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:40:12.089 10039.87 IOPS, 39.22 MiB/s 9412.38 IOPS, 36.77 MiB/s 8858.71 IOPS, 34.60 MiB/s 8366.56 IOPS, 32.68 MiB/s 8103.84 IOPS, 31.66 MiB/s 8208.90 IOPS, 32.07 MiB/s 8297.48 IOPS, 32.41 MiB/s 8496.05 IOPS, 33.19 MiB/s 8683.83 IOPS, 33.92 MiB/s 8846.71 IOPS, 34.56 MiB/s 8943.48 IOPS, 34.94 MiB/s 8997.54 IOPS, 35.15 MiB/s 9036.19 IOPS, 35.30 MiB/s 9104.29 IOPS, 35.56 MiB/s 9227.55 IOPS, 36.05 MiB/s 9345.77 IOPS, 36.51 MiB/s 9447.94 IOPS, 36.91 MiB/s
00:40:12.089 [2024-10-01 20:39:04.358658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:12.089 [2024-10-01 20:39:04.358706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
[... further NOTICE pairs for lba 117080-118120, again all completing ASYMMETRIC ACCESS INACCESSIBLE (03/02), continuing past this excerpt ...]
cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:40:12.092 [2024-10-01 20:39:04.363837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.092 [2024-10-01 20:39:04.363851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:40:12.092 [2024-10-01 20:39:04.363871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.092 [2024-10-01 20:39:04.363885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:40:12.092 [2024-10-01 20:39:04.363905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.092 [2024-10-01 20:39:04.363919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:40:12.092 [2024-10-01 20:39:04.363939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.092 [2024-10-01 20:39:04.363953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:40:12.092 [2024-10-01 20:39:04.363973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.092 [2024-10-01 20:39:04.363987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:40:12.092 [2024-10-01 20:39:04.364007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.092 [2024-10-01 20:39:04.364021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:40:12.092 [2024-10-01 20:39:04.364041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:117584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.092 [2024-10-01 20:39:04.364055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:40:12.092 [2024-10-01 20:39:04.364075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:117616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.092 [2024-10-01 20:39:04.364089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:40:12.092 [2024-10-01 20:39:04.364123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:117648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.092 [2024-10-01 20:39:04.364137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:40:12.092 [2024-10-01 20:39:04.364185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.092 [2024-10-01 20:39:04.364200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:40:12.092 [2024-10-01 20:39:04.364219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:117712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.092 [2024-10-01 20:39:04.364242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:40:12.092 [2024-10-01 20:39:04.364263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:117744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.092 [2024-10-01 20:39:04.364277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:40:12.092 [2024-10-01 20:39:04.364297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.092 [2024-10-01 20:39:04.364311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:40:12.092 [2024-10-01 20:39:04.364329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.092 [2024-10-01 20:39:04.364343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:40:12.092 [2024-10-01 20:39:04.364362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.092 [2024-10-01 20:39:04.364375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:40:12.092 [2024-10-01 20:39:04.364394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.092 [2024-10-01 20:39:04.364407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:40:12.092 [2024-10-01 20:39:04.364426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.092 [2024-10-01 20:39:04.364439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:40:12.092 [2024-10-01 20:39:04.365207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.093 [2024-10-01 20:39:04.365235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.365261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.093 [2024-10-01 20:39:04.365278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.365297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:40:12.093 [2024-10-01 20:39:04.365312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.365332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.093 [2024-10-01 20:39:04.365346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.365366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.093 [2024-10-01 20:39:04.365379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.365399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.093 [2024-10-01 20:39:04.365413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.365821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.093 [2024-10-01 20:39:04.365846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.365870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:117824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.093 [2024-10-01 20:39:04.365885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.365905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.093 [2024-10-01 20:39:04.365919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.365937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.093 [2024-10-01 20:39:04.365951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.365971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.093 [2024-10-01 20:39:04.365984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.366004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:117952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.093 [2024-10-01 20:39:04.366018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.366037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:47 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.093 [2024-10-01 20:39:04.366050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.366069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.093 [2024-10-01 20:39:04.366083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.366102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.093 [2024-10-01 20:39:04.366115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.366134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.093 [2024-10-01 20:39:04.366165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.366187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.093 [2024-10-01 20:39:04.366202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.366221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.093 [2024-10-01 20:39:04.366234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.366265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.093 [2024-10-01 20:39:04.366280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.367004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.093 [2024-10-01 20:39:04.367029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.367054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.093 [2024-10-01 20:39:04.367071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.367091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.093 [2024-10-01 20:39:04.367105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 
20:39:04.367143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:117992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.093 [2024-10-01 20:39:04.367157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.367177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.093 [2024-10-01 20:39:04.367190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.367209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.093 [2024-10-01 20:39:04.367223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.367257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.093 [2024-10-01 20:39:04.367271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.367290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.093 [2024-10-01 20:39:04.367304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.367325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.093 [2024-10-01 20:39:04.367338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.367358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.093 [2024-10-01 20:39:04.367371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:40:12.093 [2024-10-01 20:39:04.367391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:117496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.094 [2024-10-01 20:39:04.367404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.367423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.094 [2024-10-01 20:39:04.367448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.367470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:117176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.094 [2024-10-01 20:39:04.367484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.367503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.094 [2024-10-01 20:39:04.367517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.367536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.094 [2024-10-01 20:39:04.367550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.367569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.094 [2024-10-01 20:39:04.367583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.367602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.094 [2024-10-01 20:39:04.367616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.367636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.094 [2024-10-01 20:39:04.367649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.367682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.094 [2024-10-01 20:39:04.367695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.367714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:117584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.094 [2024-10-01 20:39:04.367727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.367747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:117648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.094 [2024-10-01 20:39:04.367760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.367807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.094 [2024-10-01 20:39:04.367823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.367843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:117312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.094 [2024-10-01 20:39:04.367857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.367878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.094 [2024-10-01 20:39:04.367899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.367921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.094 [2024-10-01 20:39:04.367935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.367956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.094 [2024-10-01 20:39:04.367970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.367990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:118064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.094 [2024-10-01 20:39:04.368004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.368024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.094 [2024-10-01 20:39:04.368039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.368059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.094 [2024-10-01 20:39:04.368073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.369302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.094 [2024-10-01 20:39:04.369329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.369355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.094 [2024-10-01 20:39:04.369371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.369392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.094 [2024-10-01 20:39:04.369406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.369425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:118112 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:40:12.094 [2024-10-01 20:39:04.369439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.369459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:118152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.094 [2024-10-01 20:39:04.369473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.369493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.094 [2024-10-01 20:39:04.369506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.369540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.094 [2024-10-01 20:39:04.369597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.369640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:117728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.094 [2024-10-01 20:39:04.369655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.369677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.094 [2024-10-01 20:39:04.369692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.369714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:117888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.094 [2024-10-01 20:39:04.369730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:40:12.094 [2024-10-01 20:39:04.369752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.095 [2024-10-01 20:39:04.369768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.369792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.095 [2024-10-01 20:39:04.369807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.369829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.095 [2024-10-01 20:39:04.369845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.369867] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.095 [2024-10-01 20:39:04.369882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.369904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:117800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.095 [2024-10-01 20:39:04.369919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.369941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:117864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.095 [2024-10-01 20:39:04.369957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.369979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.095 [2024-10-01 20:39:04.369994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.370016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.095 [2024-10-01 20:39:04.370058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.370102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.095 [2024-10-01 20:39:04.370115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.370171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.095 [2024-10-01 20:39:04.370187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.370209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:118376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.095 [2024-10-01 20:39:04.370225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.370247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.095 [2024-10-01 20:39:04.370280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.370309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.095 [2024-10-01 20:39:04.370326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 
20:39:04.370348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:117992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.095 [2024-10-01 20:39:04.370364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.370386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.095 [2024-10-01 20:39:04.370402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.370424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.095 [2024-10-01 20:39:04.370439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.370491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.095 [2024-10-01 20:39:04.370536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.371416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:117560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.095 [2024-10-01 20:39:04.371446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.371490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.095 [2024-10-01 20:39:04.371537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.371557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.095 [2024-10-01 20:39:04.371571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.371590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.095 [2024-10-01 20:39:04.371603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.371635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:117584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.095 [2024-10-01 20:39:04.371650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.371669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:117712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.095 [2024-10-01 20:39:04.371682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.371701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.095 [2024-10-01 20:39:04.371715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.371733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:118032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.095 [2024-10-01 20:39:04.371747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.371766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.095 [2024-10-01 20:39:04.371809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.372165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:118168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.095 [2024-10-01 20:39:04.372193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.372221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:118200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.095 [2024-10-01 20:39:04.372239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:40:12.095 [2024-10-01 20:39:04.372262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:118232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.096 [2024-10-01 20:39:04.372278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.372300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:118264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.096 [2024-10-01 20:39:04.372316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.372338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:118408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.096 [2024-10-01 20:39:04.372353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.372375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:118424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.096 [2024-10-01 20:39:04.372391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.372413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:118440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.096 [2024-10-01 20:39:04.372443] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.372478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:118456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.096 [2024-10-01 20:39:04.372514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.372535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:118472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.096 [2024-10-01 20:39:04.372549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.372568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.096 [2024-10-01 20:39:04.372582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.372600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:118296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.096 [2024-10-01 20:39:04.372614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.372633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:117968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.096 [2024-10-01 20:39:04.372646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.372665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:118024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.096 [2024-10-01 20:39:04.372678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.372697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.096 [2024-10-01 20:39:04.372710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.372729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.096 [2024-10-01 20:39:04.372742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.372761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.096 [2024-10-01 20:39:04.372775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.372794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:40:12.096 [2024-10-01 20:39:04.372807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.372825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:117728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.096 [2024-10-01 20:39:04.372855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.372875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:117888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.096 [2024-10-01 20:39:04.372889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.372908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.096 [2024-10-01 20:39:04.372929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.372950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.096 [2024-10-01 20:39:04.372964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.372983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:117864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.096 [2024-10-01 20:39:04.372997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.373017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.096 [2024-10-01 20:39:04.373030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.373050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:118360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.096 [2024-10-01 20:39:04.373064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.373083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:118392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.096 [2024-10-01 20:39:04.373097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.373132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:117992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.096 [2024-10-01 20:39:04.373163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.373199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:47 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.096 [2024-10-01 20:39:04.373219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.374296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:118496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.096 [2024-10-01 20:39:04.374325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.374354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:118512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.096 [2024-10-01 20:39:04.374372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.374395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.096 [2024-10-01 20:39:04.374411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:40:12.096 [2024-10-01 20:39:04.374433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:118104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.097 [2024-10-01 20:39:04.374449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:40:12.097 [2024-10-01 20:39:04.374470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:118160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.097 [2024-10-01 20:39:04.374497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:40:12.097 [2024-10-01 20:39:04.374550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.097 [2024-10-01 20:39:04.374565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:40:12.097 [2024-10-01 20:39:04.374586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:117816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.097 [2024-10-01 20:39:04.374600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:40:12.097 [2024-10-01 20:39:04.374621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.097 [2024-10-01 20:39:04.374636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:40:12.097 [2024-10-01 20:39:04.374657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.097 [2024-10-01 20:39:04.374671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:40:12.097 [2024-10-01 
00:40:12.097 [Repeated nvme_qpair.c NOTICE output, 2024-10-01 20:39:04.374 through 20:39:04.394: nvme_io_qpair_print_command entries for READ and WRITE commands (sqid:1 nsid:1, len:8, varying cid and lba), each paired with a spdk_nvme_print_completion entry reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1.]
cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:40:12.104 [2024-10-01 20:39:04.394448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:119456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.104 [2024-10-01 20:39:04.394464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:12.104 [2024-10-01 20:39:04.394485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:119472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.104 [2024-10-01 20:39:04.394499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:40:12.104 [2024-10-01 20:39:04.394547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:119488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.104 [2024-10-01 20:39:04.394560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:40:12.104 [2024-10-01 20:39:04.394578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.104 [2024-10-01 20:39:04.394591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:40:12.104 [2024-10-01 20:39:04.394610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:119520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.104 [2024-10-01 20:39:04.394622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:40:12.104 [2024-10-01 20:39:04.394641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:118808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.104 [2024-10-01 20:39:04.394653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:40:12.104 [2024-10-01 20:39:04.394671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:118872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.104 [2024-10-01 20:39:04.394685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:40:12.104 [2024-10-01 20:39:04.394703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:118936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.104 [2024-10-01 20:39:04.394716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:40:12.104 [2024-10-01 20:39:04.394734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.104 [2024-10-01 20:39:04.394747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:40:12.104 [2024-10-01 20:39:04.396047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:119056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.104 [2024-10-01 20:39:04.396083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:40:12.104 [2024-10-01 20:39:04.396154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:119088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.104 [2024-10-01 20:39:04.396187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:40:12.104 [2024-10-01 20:39:04.396207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:119120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.104 [2024-10-01 20:39:04.396220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:119152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.105 [2024-10-01 20:39:04.396252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:119184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.105 [2024-10-01 20:39:04.396283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.105 [2024-10-01 20:39:04.396314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:119248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.105 [2024-10-01 20:39:04.396346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:119280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.105 [2024-10-01 20:39:04.396377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:119312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.105 [2024-10-01 20:39:04.396408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:119344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.105 [2024-10-01 20:39:04.396439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:119376 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:40:12.105 [2024-10-01 20:39:04.396471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:119408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.105 [2024-10-01 20:39:04.396502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:118976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.105 [2024-10-01 20:39:04.396544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:118736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.105 [2024-10-01 20:39:04.396578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:118624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.105 [2024-10-01 20:39:04.396609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:118456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.105 [2024-10-01 20:39:04.396640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.105 [2024-10-01 20:39:04.396672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.105 [2024-10-01 20:39:04.396703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:118920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.105 [2024-10-01 20:39:04.396734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:118632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.105 [2024-10-01 20:39:04.396765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:59 nsid:1 lba:118320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.105 [2024-10-01 20:39:04.396797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:119032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.105 [2024-10-01 20:39:04.396827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:119064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.105 [2024-10-01 20:39:04.396859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:119096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.105 [2024-10-01 20:39:04.396890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:119128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.105 [2024-10-01 20:39:04.396928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:119160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.105 [2024-10-01 20:39:04.396961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.396980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:119192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.105 [2024-10-01 20:39:04.396993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.397011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:119224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.105 [2024-10-01 20:39:04.397024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.397042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.105 [2024-10-01 20:39:04.397055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:40:12.105 [2024-10-01 20:39:04.397073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:119288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.105 [2024-10-01 20:39:04.397086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 
20:39:04.397104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.106 [2024-10-01 20:39:04.397117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.397165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.106 [2024-10-01 20:39:04.397181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.397200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:119384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.106 [2024-10-01 20:39:04.397213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.397232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:119416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.106 [2024-10-01 20:39:04.397245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.397264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:119456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.106 [2024-10-01 20:39:04.397277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.397296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:119488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.106 [2024-10-01 20:39:04.397309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.397328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:119520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.106 [2024-10-01 20:39:04.397341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.397368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:118872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.106 [2024-10-01 20:39:04.397382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.397401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:119000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.106 [2024-10-01 20:39:04.397415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.399038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:118888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.106 [2024-10-01 20:39:04.399073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:41 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.399119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:119016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.106 [2024-10-01 20:39:04.399172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.399195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:119544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.106 [2024-10-01 20:39:04.399211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.399231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:119560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.106 [2024-10-01 20:39:04.399244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.399263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:119576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.106 [2024-10-01 20:39:04.399276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.399295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:119592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.106 [2024-10-01 20:39:04.399309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.399328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:119608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.106 [2024-10-01 20:39:04.399341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.399359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:119624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.106 [2024-10-01 20:39:04.399373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.399392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:119640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.106 [2024-10-01 20:39:04.399405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.399424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:119656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.106 [2024-10-01 20:39:04.399437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.399469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:119672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.106 [2024-10-01 20:39:04.399483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.399516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:119688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.106 [2024-10-01 20:39:04.399529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.399547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:119704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.106 [2024-10-01 20:39:04.399560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.399579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.106 [2024-10-01 20:39:04.399592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.399610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.106 [2024-10-01 20:39:04.399623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.399641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:119512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.106 [2024-10-01 20:39:04.399654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.399672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:119088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.106 [2024-10-01 20:39:04.399686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.399704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:119152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.106 [2024-10-01 20:39:04.399716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.399735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:119216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.106 [2024-10-01 20:39:04.399748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:12.106 [2024-10-01 20:39:04.399775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:119280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.107 [2024-10-01 20:39:04.399823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.400655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:119344 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:40:12.107 [2024-10-01 20:39:04.400688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.400733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:119408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.107 [2024-10-01 20:39:04.400749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.400779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:118736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.107 [2024-10-01 20:39:04.400794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.400813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:118456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.107 [2024-10-01 20:39:04.400826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.400845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:118672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.107 [2024-10-01 20:39:04.400858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.400877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.107 [2024-10-01 20:39:04.400890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.400908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.107 [2024-10-01 20:39:04.400921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.400940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:119096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.107 [2024-10-01 20:39:04.400953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.400972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:119160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.107 [2024-10-01 20:39:04.400985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.401003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:119224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.107 [2024-10-01 20:39:04.401016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.401035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:119288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.107 [2024-10-01 20:39:04.401048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.401066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:119352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.107 [2024-10-01 20:39:04.401079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.401098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:119416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.107 [2024-10-01 20:39:04.401111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.401156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:119488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.107 [2024-10-01 20:39:04.401174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.401194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:118872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.107 [2024-10-01 20:39:04.401216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.401236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:119712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.107 [2024-10-01 20:39:04.401250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.401269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:119728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.107 [2024-10-01 20:39:04.401282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.401301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:119744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.107 [2024-10-01 20:39:04.401315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.401334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:119760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.107 [2024-10-01 20:39:04.401347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.401366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:119776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.107 [2024-10-01 20:39:04.401379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 
20:39:04.401398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:119792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.107 [2024-10-01 20:39:04.401411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.401430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:119808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.107 [2024-10-01 20:39:04.401444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.401463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:119824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.107 [2024-10-01 20:39:04.401476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.401508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:119840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.107 [2024-10-01 20:39:04.401521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.401540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.107 [2024-10-01 20:39:04.401553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.401571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:119136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.107 [2024-10-01 20:39:04.401584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:40:12.107 [2024-10-01 20:39:04.401602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:119200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.108 [2024-10-01 20:39:04.401621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.401641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:119264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.108 [2024-10-01 20:39:04.401654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.401672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:119328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.108 [2024-10-01 20:39:04.401685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.401703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.108 [2024-10-01 20:39:04.401716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.401735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:118856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.108 [2024-10-01 20:39:04.401748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.401766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:118608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.108 [2024-10-01 20:39:04.401779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.401797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:119016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.108 [2024-10-01 20:39:04.401811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.401829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:119560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.108 [2024-10-01 20:39:04.401842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.401860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:119592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.108 [2024-10-01 20:39:04.401873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.401892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:119624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.108 [2024-10-01 20:39:04.401905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.401923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:119656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.108 [2024-10-01 20:39:04.401936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.401955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:119688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.108 [2024-10-01 20:39:04.401968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.401986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.108 [2024-10-01 20:39:04.401999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.402024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:119512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.108 [2024-10-01 20:39:04.402038] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.402057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:119152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.108 [2024-10-01 20:39:04.402070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.402088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:119280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.108 [2024-10-01 20:39:04.402102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.403896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:119472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.108 [2024-10-01 20:39:04.403933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.403980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:119848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.108 [2024-10-01 20:39:04.403997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.404017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:119864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.108 [2024-10-01 20:39:04.404031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.404051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:119880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.108 [2024-10-01 20:39:04.404065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.404084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:119536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.108 [2024-10-01 20:39:04.404112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.404131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:119568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.108 [2024-10-01 20:39:04.404173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.404194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:119600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.108 [2024-10-01 20:39:04.404208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.404227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:119632 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:40:12.108 [2024-10-01 20:39:04.404240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.404258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:119664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.108 [2024-10-01 20:39:04.404271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.404302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.108 [2024-10-01 20:39:04.404316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.404335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:119408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.108 [2024-10-01 20:39:04.404348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:40:12.108 [2024-10-01 20:39:04.404366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:118456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.109 [2024-10-01 20:39:04.404379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:40:12.109 [2024-10-01 20:39:04.404397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:118632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.109 [2024-10-01 20:39:04.404410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:40:12.109 [2024-10-01 20:39:04.404428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.109 [2024-10-01 20:39:04.404441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:40:12.109 [2024-10-01 20:39:04.404459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.109 [2024-10-01 20:39:04.404472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:40:12.109 [2024-10-01 20:39:04.404490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:119352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.109 [2024-10-01 20:39:04.404503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:40:12.109 [2024-10-01 20:39:04.404521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:119488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.109 [2024-10-01 20:39:04.404534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:40:12.109 [2024-10-01 20:39:04.404552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:8 nsid:1 lba:119712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.109 [2024-10-01 20:39:04.404565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:40:12.109 [2024-10-01 20:39:04.404583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:119744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.109 [2024-10-01 20:39:04.404596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:12.109 [2024-10-01 20:39:04.404614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:119776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.109 [2024-10-01 20:39:04.404627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:40:12.109 [2024-10-01 20:39:04.404645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:119808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.109 [2024-10-01 20:39:04.404659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:40:12.109 [2024-10-01 20:39:04.404687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:119840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.109 [2024-10-01 20:39:04.404701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:40:12.109 [2024-10-01 20:39:04.404719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.109 [2024-10-01 20:39:04.404732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:40:12.109 [2024-10-01 20:39:04.404750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:119264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.109 [2024-10-01 20:39:04.404763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:40:12.109 [2024-10-01 20:39:04.404782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:119392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.109 [2024-10-01 20:39:04.404795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:40:12.109 [2024-10-01 20:39:04.404813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:118608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:12.109 [2024-10-01 20:39:04.404826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:40:12.109 [2024-10-01 20:39:04.404844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:119560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:12.109 [2024-10-01 20:39:04.404857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:40:12.109 [2024-10-01 
20:39:04.404875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:119624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:12.109 [2024-10-01 20:39:04] nvme_qpair.c: [several hundred repetitive *NOTICE* lines elided: paired nvme_io_qpair_print_command entries (READ/WRITE on sqid:1, lba ~118456-120208, len:8) each followed by a spdk_nvme_print_completion entry reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02), logged while the path's ANA state was inaccessible during failover]
00:40:12.113  9474.78 IOPS,    37.01 MiB/s
00:40:12.113  9495.88 IOPS,    37.09 MiB/s
00:40:12.113 Received shutdown signal, test time was about 33.662247 seconds
00:40:12.113
00:40:12.113                                                            Latency(us)
00:40:12.113 Device Information          : runtime(s)      IOPS     MiB/s   Fail/s    TO/s    Average      min        max
00:40:12.113 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:40:12.113 Verification LBA range: start 0x0 length 0x4000
00:40:12.113 Nvme0n1                     :      33.66   9506.80     37.14     0.00    0.00   13438.24   140.57 4026531.84
00:40:12.113 ===================================================================================================================
00:40:12.113 Total                       :              9506.80     37.14     0.00    0.00   13438.24   140.57 4026531.84
00:40:12.113 [2024-10-01 20:39:06.986543] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times
00:40:12.113 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
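For context on the flood of (03/02) completions summarized above: the multipath_status test drives failover by toggling the ANA state of the subsystem's listeners, and every I/O that lands on an inaccessible path is completed back with ASYMMETRIC ACCESS INACCESSIBLE until a usable path takes over. A minimal sketch of the kind of RPC that produces this, assuming rpc.py's nvmf_subsystem_listener_set_ana_state flag names (-t/-a/-s/-n) and reusing the subsystem NQN, first target address, and port that appear elsewhere in this log:

    # Hedged sketch, not copied from this log: flip one listener's ANA state
    # so in-flight I/O completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02),
    # then restore it so the verify workload can resume on that path.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -n inaccessible   # path goes dark
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -n optimized      # path restored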
00:40:12.395 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:40:12.395 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:40:12.395 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:40:12.395 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup
00:40:12.395 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:40:12.395 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:40:12.395 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:40:12.395 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:40:12.395 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:40:12.395 rmmod nvme_tcp
00:40:12.395 rmmod nvme_fabrics
00:40:12.395 rmmod nvme_keyring
00:40:12.396 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:40:12.396 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:40:12.396 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:40:12.396 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 89214 ']'
00:40:12.396 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 89214
00:40:12.396 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 89214 ']'
00:40:12.396 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 89214
00:40:12.396 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:40:12.396 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:40:12.396 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89214
00:40:12.396 killing process with pid 89214
00:40:12.396 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:40:12.396 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:40:12.396 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89214'
00:40:12.396 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 89214
00:40:12.396 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 89214
00:40:12.667 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:40:12.667 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:40:12.667 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:40:12.667 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:40:12.667 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save
00:40:12.667 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:40:12.667 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore
00:40:12.667 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:40:12.667 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:40:12.667 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:40:12.667 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:40:12.667 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:40:12.667 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:40:12.667 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:40:12.667 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:40:12.667 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:40:12.667 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:40:12.667 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:40:12.926 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:40:12.926 20:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:40:12.926 20:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:40:12.926 20:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:40:12.926 20:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns
00:40:12.926 20:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:40:12.926 20:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:40:12.926 20:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:40:12.926 20:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0
00:40:12.926
00:40:12.926 real	0m39.211s
00:40:12.926 user	2m8.372s
00:40:12.926 sys	0m9.302s
00:40:12.926 20:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:40:12.926 ************************************
00:40:12.926 END TEST nvmf_host_multipath_status
00:40:12.926 ************************************
00:40:12.926 20:39:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:40:12.926 20:39:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:40:12.926 20:39:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
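The iptr step in the teardown trace above is worth unpacking: the log shows iptables-save, grep -v SPDK_NVMF, and iptables-restore run back to back, so the cleanup reloads the current firewall ruleset minus every rule the test framework tagged with an SPDK_NVMF comment when it opened port 4420, leaving unrelated host rules untouched. A one-line sketch of that idiom, assuming only standard iptables tooling:

    # Hedged reconstruction of the iptr cleanup pipeline from the trace above:
    # keep all rules except those whose comment marks them as SPDK-owned.
    iptables-save | grep -v SPDK_NVMF | iptables-restore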
00:40:12.926 20:39:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:12.926 20:39:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:40:12.926 ************************************ 00:40:12.926 START TEST nvmf_discovery_remove_ifc 00:40:12.926 ************************************ 00:40:12.926 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:40:13.187 * Looking for test storage... 00:40:13.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:13.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:13.187 --rc genhtml_branch_coverage=1 00:40:13.187 --rc genhtml_function_coverage=1 00:40:13.187 --rc genhtml_legend=1 00:40:13.187 --rc geninfo_all_blocks=1 00:40:13.187 --rc geninfo_unexecuted_blocks=1 00:40:13.187 00:40:13.187 ' 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:13.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:13.187 --rc genhtml_branch_coverage=1 00:40:13.187 --rc genhtml_function_coverage=1 00:40:13.187 --rc genhtml_legend=1 00:40:13.187 --rc geninfo_all_blocks=1 00:40:13.187 --rc geninfo_unexecuted_blocks=1 00:40:13.187 00:40:13.187 ' 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:13.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:13.187 --rc genhtml_branch_coverage=1 00:40:13.187 --rc genhtml_function_coverage=1 00:40:13.187 --rc genhtml_legend=1 00:40:13.187 --rc geninfo_all_blocks=1 00:40:13.187 --rc geninfo_unexecuted_blocks=1 00:40:13.187 00:40:13.187 ' 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:13.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:13.187 --rc genhtml_branch_coverage=1 00:40:13.187 --rc genhtml_function_coverage=1 00:40:13.187 --rc genhtml_legend=1 00:40:13.187 --rc geninfo_all_blocks=1 00:40:13.187 --rc geninfo_unexecuted_blocks=1 00:40:13.187 00:40:13.187 ' 00:40:13.187 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:13.188 20:39:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:13.188 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@458 -- # nvmf_veth_init 00:40:13.188 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:13.189 20:39:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:40:13.189 Cannot find device "nvmf_init_br" 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:40:13.189 Cannot find device "nvmf_init_br2" 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:40:13.189 Cannot find device "nvmf_tgt_br" 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:40:13.189 Cannot find device "nvmf_tgt_br2" 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:40:13.189 Cannot find device "nvmf_init_br" 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:40:13.189 Cannot find device "nvmf_init_br2" 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:40:13.189 Cannot find device "nvmf_tgt_br" 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:40:13.189 Cannot find device "nvmf_tgt_br2" 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:40:13.189 Cannot find device "nvmf_br" 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:40:13.189 Cannot find device "nvmf_init_if" 00:40:13.189 20:39:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:40:13.189 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:40:13.449 Cannot find device "nvmf_init_if2" 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:13.449 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:13.449 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:13.449 20:39:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:13.449 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:40:13.450 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:13.450 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:40:13.450 00:40:13.450 --- 10.0.0.3 ping statistics --- 00:40:13.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:13.450 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:40:13.450 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:40:13.450 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:40:13.450 00:40:13.450 --- 10.0.0.4 ping statistics --- 00:40:13.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:13.450 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:13.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:13.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:40:13.450 00:40:13.450 --- 10.0.0.1 ping statistics --- 00:40:13.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:13.450 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:40:13.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:13.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:40:13.450 00:40:13.450 --- 10.0.0.2 ping statistics --- 00:40:13.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:13.450 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # return 0 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:13.450 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:13.709 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=90661 00:40:13.709 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:40:13.709 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 90661 00:40:13.709 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 90661 ']' 00:40:13.709 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:13.709 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:13.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:13.709 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
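For readers reconstructing this fixture by hand, the nvmf_veth_init sequence traced above reduces to the following standalone sketch. The interface names, namespace name, and /24 addresses are taken verbatim from the log; treat this as an illustrative reduction, not the literal nvmf/common.sh source:

    # target lives in its own namespace; each side gets a veth pair, bridged together
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # allow the NVMe/TCP I/O port through the firewall, then sanity-check reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) is set up the same way, and the earlier "Cannot find device" errors are simply the idempotent pre-cleanup running against a fresh environment.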
00:40:13.709 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:13.709 20:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:13.709 [2024-10-01 20:39:08.762735] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:40:13.709 [2024-10-01 20:39:08.762824] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:13.709 [2024-10-01 20:39:08.900846] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:13.709 [2024-10-01 20:39:08.951212] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:13.710 [2024-10-01 20:39:08.951276] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:13.710 [2024-10-01 20:39:08.951301] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:13.710 [2024-10-01 20:39:08.951308] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:13.710 [2024-10-01 20:39:08.951314] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:13.710 [2024-10-01 20:39:08.951338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:13.969 20:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:13.969 20:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:40:13.969 20:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:13.969 20:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:13.969 20:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:13.969 20:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:13.969 20:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:40:13.969 20:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:13.969 20:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:13.969 [2024-10-01 20:39:09.133984] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:13.969 [2024-10-01 20:39:09.142107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:40:13.969 null0 00:40:13.969 [2024-10-01 20:39:09.174032] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:40:13.969 20:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:13.969 20:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=90692 00:40:13.969 20:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:40:13.969 20:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@60 -- # waitforlisten 90692 /tmp/host.sock 00:40:13.969 20:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 90692 ']' 00:40:13.969 20:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:40:13.969 20:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:13.969 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:40:13.969 20:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:40:13.969 20:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:13.969 20:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:14.229 [2024-10-01 20:39:09.261044] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:40:14.229 [2024-10-01 20:39:09.261345] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90692 ] 00:40:14.229 [2024-10-01 20:39:09.394076] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:14.229 [2024-10-01 20:39:09.447924] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:15.164 20:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:15.164 20:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:40:15.164 20:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:15.164 20:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:40:15.164 20:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:15.164 20:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:15.164 20:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:15.164 20:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:40:15.164 20:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:15.164 20:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:15.164 20:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:15.164 20:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:40:15.164 20:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:15.164 20:39:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:16.540 [2024-10-01 20:39:11.366896] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:40:16.540 [2024-10-01 20:39:11.366940] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:40:16.540 [2024-10-01 20:39:11.366958] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:40:16.540 [2024-10-01 20:39:11.452983] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:40:16.540 [2024-10-01 20:39:11.509375] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:40:16.540 [2024-10-01 20:39:11.509461] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:40:16.540 [2024-10-01 20:39:11.509485] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:40:16.540 [2024-10-01 20:39:11.509499] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:40:16.540 [2024-10-01 20:39:11.509518] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:16.540 [2024-10-01 20:39:11.515661] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xdc68e0 was disconnected and freed. delete nvme_qpair. 
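Two separate SPDK instances are at work here: the target (pid 90661) runs inside nvmf_tgt_ns_spdk and listens on 10.0.0.3 (port 8009 for discovery, 4420 for I/O, with a null bdev "null0" behind the subsystem), while the host (pid 90692) runs in the root namespace with its RPC socket at /tmp/host.sock. The rpc_cmd helper is a thin wrapper around SPDK's scripts/rpc.py, so the host-side bring-up traced above corresponds roughly to the sketch below (the rpc.py path is an assumption based on the repo layout; the flags are copied from the trace):

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

Because --wait-for-attach blocks until the discovery log page has been fetched and the controller attached, nvme0n1 is already present on the very first bdev_get_bdevs poll.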
00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:40:16.540 20:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:40:17.473 20:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:17.473 20:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:17.473 20:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:17.473 20:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:17.473 20:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:17.473 20:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:17.473 20:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:17.473 20:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:17.473 20:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:40:17.473 20:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:40:18.846 20:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:18.846 20:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:18.846 20:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:18.846 20:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:18.846 20:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:18.846 20:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:18.846 20:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:18.846 20:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:18.846 20:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:40:18.846 20:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:40:19.779 20:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:19.779 20:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:19.779 20:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:19.779 20:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:19.779 20:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:19.779 20:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:19.779 20:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:19.779 20:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:19.779 20:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:40:19.779 20:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:40:20.711 20:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:20.711 20:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:20.711 20:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:20.711 20:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:20.711 20:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:20.711 20:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:20.711 20:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:20.711 20:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:40:20.711 20:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:40:20.711 20:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:40:21.642 20:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:21.642 20:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:21.642 20:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:21.642 20:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:21.642 20:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:21.642 20:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:21.642 20:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:21.899 20:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:21.899 [2024-10-01 20:39:16.937779] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:40:21.899 [2024-10-01 20:39:16.938034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:21.899 [2024-10-01 20:39:16.938204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:21.900 [2024-10-01 20:39:16.938505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:21.900 [2024-10-01 20:39:16.938629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:21.900 [2024-10-01 20:39:16.938691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:21.900 [2024-10-01 20:39:16.938820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:21.900 [2024-10-01 20:39:16.938874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:21.900 [2024-10-01 20:39:16.939007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:21.900 [2024-10-01 20:39:16.939061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:40:21.900 [2024-10-01 20:39:16.939179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:21.900 [2024-10-01 20:39:16.939193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd39220 is same with the state(6) to be set 00:40:21.900 20:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:40:21.900 20:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 
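The polling loop repeated in each one-second iteration above comes from two small helpers whose shape can be read straight off the trace. This is a reconstruction inferred from the commands they emit; the authoritative definitions live in host/discovery_remove_ifc.sh:

    get_bdev_list() {
        # list attached bdev names as a single sorted, space-separated string
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # poll once per second until the bdev list equals the expected value
        # (an empty argument means "wait until no bdevs remain")
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }

The iterations still printing nvme0n1 after the interface goes down show the controller riding out its reconnect window; the errno 110 (Connection timed out) messages that follow mark the point where the data path actually breaks.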
00:40:21.900 [2024-10-01 20:39:16.947781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd39220 (9): Bad file descriptor 00:40:21.900 [2024-10-01 20:39:16.957795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:40:22.835 20:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:22.835 20:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:22.835 20:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:22.835 20:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:22.835 20:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:22.835 20:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:22.835 20:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:22.835 [2024-10-01 20:39:17.996226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:40:22.835 [2024-10-01 20:39:17.996344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd39220 with addr=10.0.0.3, port=4420 00:40:22.835 [2024-10-01 20:39:17.996376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd39220 is same with the state(6) to be set 00:40:22.835 [2024-10-01 20:39:17.996429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd39220 (9): Bad file descriptor 00:40:22.835 [2024-10-01 20:39:17.997323] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:22.835 [2024-10-01 20:39:17.997401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:22.835 [2024-10-01 20:39:17.997424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:40:22.835 [2024-10-01 20:39:17.997446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:22.835 [2024-10-01 20:39:17.997486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:22.835 [2024-10-01 20:39:17.997508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:40:22.835 20:39:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:22.835 20:39:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:40:22.835 20:39:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:40:23.771 [2024-10-01 20:39:18.997559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:40:23.771 [2024-10-01 20:39:18.997592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:23.771 [2024-10-01 20:39:18.997616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:40:23.771 [2024-10-01 20:39:18.997624] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:40:23.771 [2024-10-01 20:39:18.997641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:23.771 [2024-10-01 20:39:18.997664] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:40:23.771 [2024-10-01 20:39:18.997698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:23.771 [2024-10-01 20:39:18.997711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:23.771 [2024-10-01 20:39:18.997723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:23.771 [2024-10-01 20:39:18.997730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:23.771 [2024-10-01 20:39:18.997739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:23.771 [2024-10-01 20:39:18.997746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:23.771 [2024-10-01 20:39:18.997754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:23.771 [2024-10-01 20:39:18.997762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:23.771 [2024-10-01 20:39:18.997770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:40:23.771 [2024-10-01 20:39:18.997778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:23.771 [2024-10-01 20:39:18.997785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
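This failure cascade is what the attach parameters prescribe: with --reconnect-delay-sec 1 the host retries the 10.0.0.3:4420 connection about once per second, and with --ctrlr-loss-timeout-sec 2 it gives up after roughly two seconds of failure, fails the reset, removes the discovery entry, and tears the controller down. That reading matches the timestamps above: the first timeout lands at 20:39:16, the failed reconnect at 20:39:17, and the final "in failed state" at 20:39:18.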
00:40:23.771 [2024-10-01 20:39:18.998439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd38a90 (9): Bad file descriptor 00:40:23.771 [2024-10-01 20:39:18.999450] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:40:23.771 [2024-10-01 20:39:18.999469] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:40:24.028 20:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:24.028 20:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:24.028 20:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:24.028 20:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:24.028 20:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:24.028 20:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:24.028 20:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:24.028 20:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:24.028 20:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:40:24.028 20:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:40:24.028 20:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:24.028 20:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:40:24.028 20:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:24.028 20:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:24.028 20:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:24.028 20:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:24.028 20:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:24.028 20:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:24.028 20:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:24.028 20:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:24.028 20:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:40:24.028 20:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:40:24.964 20:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:24.964 20:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:24.964 20:39:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:24.964 20:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:24.964 20:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:24.964 20:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:24.964 20:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:24.964 20:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:24.964 20:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:40:24.964 20:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:40:25.900 [2024-10-01 20:39:21.009600] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:40:25.900 [2024-10-01 20:39:21.009622] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:40:25.900 [2024-10-01 20:39:21.009637] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:40:25.900 [2024-10-01 20:39:21.095694] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:40:25.900 [2024-10-01 20:39:21.151718] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:40:25.900 [2024-10-01 20:39:21.151759] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:40:25.900 [2024-10-01 20:39:21.151806] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:40:25.900 [2024-10-01 20:39:21.151838] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:40:25.900 [2024-10-01 20:39:21.151846] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:40:26.159 [2024-10-01 20:39:21.158048] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xd9f680 was disconnected and freed. delete nvme_qpair. 
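Put end to end, the fault-injection half of the test reduces to this sketch (commands as traced at discovery_remove_ifc.sh@75-76 and @82-83, using the helpers reconstructed earlier):

    # drop the target's data-path address and take the link down
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    wait_for_bdev ''          # nvme0n1 must vanish once the ctrlr-loss timeout fires

    # restore the address and let the still-running discovery service reconnect
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1     # the rediscovered subsystem comes back as a new controller

The bdev reappears as nvme1n1 rather than nvme0n1 because the discovery service attaches a fresh controller (nvme1, qpair 0xd9f680) instead of reviving the failed nvme0, which is exactly the behavior the test asserts before tearing everything down.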
00:40:26.159 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:40:26.159 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:26.159 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:40:26.159 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:40:26.159 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:40:26.159 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:26.159 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:26.159 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:26.159 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:40:26.159 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:40:26.159 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 90692 00:40:26.159 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 90692 ']' 00:40:26.159 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 90692 00:40:26.159 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:40:26.159 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:26.159 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90692 00:40:26.159 killing process with pid 90692 00:40:26.159 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:26.159 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:26.159 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90692' 00:40:26.159 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 90692 00:40:26.159 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 90692 00:40:26.418 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:40:26.418 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:26.418 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:40:26.418 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:26.418 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:40:26.418 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:26.418 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:26.418 rmmod nvme_tcp 00:40:26.418 rmmod nvme_fabrics 00:40:26.418 rmmod nvme_keyring 00:40:26.418 20:39:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:26.418 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:40:26.418 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:40:26.418 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 90661 ']' 00:40:26.418 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 90661 00:40:26.418 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 90661 ']' 00:40:26.418 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 90661 00:40:26.418 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:40:26.418 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:26.418 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90661 00:40:26.418 killing process with pid 90661 00:40:26.418 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:26.418 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:26.418 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90661' 00:40:26.418 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 90661 00:40:26.418 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 90661 00:40:26.677 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:26.677 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:26.677 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:26.677 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:40:26.677 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:40:26.677 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:26.677 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:40:26.677 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:26.677 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:40:26.677 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:40:26.677 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:40:26.677 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:40:26.677 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:40:26.677 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:40:26.677 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:40:26.677 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:40:26.677 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:40:26.677 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:40:26.936 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:40:26.936 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:40:26.936 20:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:26.936 20:39:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:26.936 20:39:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:40:26.936 20:39:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:26.936 20:39:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:26.936 20:39:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:26.936 20:39:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:40:26.936 00:40:26.936 real 0m13.925s 00:40:26.936 user 0m24.978s 00:40:26.936 sys 0m1.601s 00:40:26.936 20:39:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:26.936 ************************************ 00:40:26.936 END TEST nvmf_discovery_remove_ifc 00:40:26.936 ************************************ 00:40:26.936 20:39:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:40:26.936 20:39:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:40:26.936 20:39:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:26.936 20:39:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:26.936 20:39:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:40:26.936 ************************************ 00:40:26.936 START TEST nvmf_identify_kernel_target 00:40:26.936 ************************************ 00:40:26.936 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:40:26.936 * Looking for test storage... 
00:40:26.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:40:26.936 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:27.196 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:40:27.196 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:27.196 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:27.196 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:27.196 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:27.196 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:27.196 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:40:27.196 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:40:27.196 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:40:27.196 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:40:27.196 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:40:27.196 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:40:27.196 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:40:27.196 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:27.196 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:40:27.196 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:40:27.196 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:27.196 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:27.196 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:40:27.196 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:40:27.196 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:27.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.197 --rc genhtml_branch_coverage=1 00:40:27.197 --rc genhtml_function_coverage=1 00:40:27.197 --rc genhtml_legend=1 00:40:27.197 --rc geninfo_all_blocks=1 00:40:27.197 --rc geninfo_unexecuted_blocks=1 00:40:27.197 00:40:27.197 ' 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:27.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.197 --rc genhtml_branch_coverage=1 00:40:27.197 --rc genhtml_function_coverage=1 00:40:27.197 --rc genhtml_legend=1 00:40:27.197 --rc geninfo_all_blocks=1 00:40:27.197 --rc geninfo_unexecuted_blocks=1 00:40:27.197 00:40:27.197 ' 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:27.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.197 --rc genhtml_branch_coverage=1 00:40:27.197 --rc genhtml_function_coverage=1 00:40:27.197 --rc genhtml_legend=1 00:40:27.197 --rc geninfo_all_blocks=1 00:40:27.197 --rc geninfo_unexecuted_blocks=1 00:40:27.197 00:40:27.197 ' 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:27.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.197 --rc genhtml_branch_coverage=1 00:40:27.197 --rc genhtml_function_coverage=1 00:40:27.197 --rc genhtml_legend=1 00:40:27.197 --rc geninfo_all_blocks=1 00:40:27.197 --rc geninfo_unexecuted_blocks=1 00:40:27.197 00:40:27.197 ' 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
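The lcov xtrace block above is scripts/common.sh probing the installed coverage tool: with lcov 1.15, `lt 1.15 2` splits both versions on ./-/: (ver1_l=2, ver2_l=1), compares index by index, finds 1 < 2 at the first position, and returns 0, so the legacy --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 spelling is exported in LCOV_OPTS. A loose reconstruction of just the '<' path (the real cmp_versions in scripts/common.sh handles all operators and normalizes components via decimal):

    lt() {
        local ver1 ver2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2   # index 0: 1 < 2, so this returns 0 (true)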
00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
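[annotation] Each time paths/export.sh is sourced it prepends the same toolchain directories again, which is why the PATH values above and below repeat /opt/go, /opt/protoc and /opt/golangci many times over. This is harmless but noisy; if one wanted to compact it, a dedup pass like the following (a sketch, nothing the test scripts actually do) keeps the first occurrence of each component:

    # Drop repeated PATH components, preserving first-seen order.
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
    PATH=${PATH%:}   # trim the trailing ":" that ORS leaves behind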
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:27.197 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:40:27.197 20:39:22 
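[annotation] The "[: : integer expression expected" complaint above is a real, if benign, bug: build_nvmf_app_args evaluates '[' '' -eq 1 ']' with an unset variable, and test cannot compare an empty string numerically, so the branch simply falls through. The usual guard is a default expansion; sketched below with a placeholder variable name, since the trace shows only the empty comparison, not which variable was unset:

    # SPDK_SOME_FLAG is a hypothetical name -- the log only shows '' -eq 1.
    if [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--some-flag)   # illustrative option, not a real nvmf_tgt flag
    fi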
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:27.197 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # nvmf_veth_init 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:27.198 20:39:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:40:27.198 Cannot find device "nvmf_init_br" 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:40:27.198 Cannot find device "nvmf_init_br2" 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:40:27.198 Cannot find device "nvmf_tgt_br" 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:40:27.198 Cannot find device "nvmf_tgt_br2" 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:40:27.198 Cannot find device "nvmf_init_br" 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:40:27.198 Cannot find device "nvmf_init_br2" 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:40:27.198 Cannot find device "nvmf_tgt_br" 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:40:27.198 Cannot find device "nvmf_tgt_br2" 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:40:27.198 Cannot find device "nvmf_br" 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:40:27.198 Cannot find device "nvmf_init_if" 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:40:27.198 Cannot find device "nvmf_init_if2" 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:40:27.198 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:27.457 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:27.457 20:39:22 
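[annotation] The burst of "Cannot find device" messages here is expected: nvmf_veth_init tears down any leftover interfaces before building fresh ones, and each failing ip command in the trace is immediately followed by "true", i.e. the script deliberately swallows the error so a clean host does not abort the run. Reduced to a sketch, the pattern is:

    # Idempotent cleanup: every command may fail on a pristine host.
    ip link set nvmf_init_br nomaster || true   # "Cannot find device" is fine here
    ip link delete nvmf_br type bridge || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true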
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:40:27.457 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:27.457 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:27.457 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:40:27.457 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:40:27.457 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:27.457 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:40:27.457 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:27.457 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:27.457 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:27.457 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:27.457 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:27.457 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:40:27.457 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:40:27.457 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:40:27.457 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:40:27.457 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:40:27.457 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:40:27.457 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:40:27.458 20:39:22 
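[annotation] What that block builds: a network namespace (nvmf_tgt_ns_spdk) holding the target side, four veth pairs whose bridge-side peers will be joined by nvmf_br, initiators at 10.0.0.1-2 in the root namespace and targets at 10.0.0.3-4 inside the namespace. Condensed to a single initiator/target pair, the recipe looks roughly like:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # host end + bridge end
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    # the *_br peer ends are enslaved to nvmf_br in the next chunk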
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:40:27.458 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:27.458 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:40:27.458 00:40:27.458 --- 10.0.0.3 ping statistics --- 00:40:27.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:27.458 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:40:27.458 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:40:27.458 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:40:27.458 00:40:27.458 --- 10.0.0.4 ping statistics --- 00:40:27.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:27.458 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:27.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:27.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:40:27.458 00:40:27.458 --- 10.0.0.1 ping statistics --- 00:40:27.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:27.458 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:40:27.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
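[annotation] Note the ipts wrapper above: every rule it installs gets "-m comment --comment SPDK_NVMF:<original args>" appended. That tag is what makes teardown surgical later: nvmftestfini dumps the ruleset, filters out the tagged lines, and restores the remainder (visible further down as iptables-save | grep -v SPDK_NVMF | iptables-restore). Both halves, sketched:

    # Install a rule and tag it so it can be found again.
    tag='SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment "$tag"
    # Later: drop every tagged rule in one pass, leaving all other rules intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore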
00:40:27.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:40:27.458 00:40:27.458 --- 10.0.0.2 ping statistics --- 00:40:27.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:27.458 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # return 0 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:40:27.458 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:40:27.717 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:40:27.717 20:39:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:40:27.976 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:27.976 Waiting for block devices as requested 00:40:27.976 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:40:27.976 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:40:28.236 No valid GPT data, bailing 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n2 ]] 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n2 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:40:28.236 20:39:23 
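[annotation] The loop running here hunts for an NVMe namespace that is safe to export: for each /sys/block/nvme* it skips zoned devices, then probes for a partition table via spdk-gpt.py and blkid. "No valid GPT data, bailing" is the desired outcome -- it means the disk is unpartitioned and treated as free. A simplified sketch of the scan (block_in_use is reduced to the blkid probe; the real helper checks more):

    nvme_dev=
    for block in /sys/block/nvme*; do
        dev=/dev/${block##*/}
        [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
        # Empty PTTYPE means no partition table -> consider the disk free.
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            nvme_dev=$dev      # last free device wins, as in the trace
        fi
    done
    echo "exporting: $nvme_dev"   # this run settles on /dev/nvme1n1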
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n2 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:40:28.236 No valid GPT data, bailing 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n2 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n3 ]] 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n3 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n3 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:40:28.236 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:40:28.496 No valid GPT data, bailing 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n3 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme1n1 ]] 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme1n1 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
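[annotation] The chunk just below is the heart of configure_kernel_target: the Linux nvmet driver is driven entirely through configfs -- mkdir creates subsystem, namespace and port objects, echo fills in their attributes, and a symlink binds the subsystem to the port. Condensed under the paths shown in the trace (attribute names follow the standard nvmet configfs layout; the log itself only shows the echoed values):

    cd /sys/kernel/config/nvmet
    nqn=nqn.2016-06.io.spdk:testnqn
    mkdir subsystems/$nqn
    mkdir subsystems/$nqn/namespaces/1
    mkdir ports/1
    echo SPDK-$nqn    > subsystems/$nqn/attr_model            # model string
    echo 1            > subsystems/$nqn/attr_allow_any_host   # skip host ACLs
    echo /dev/nvme1n1 > subsystems/$nqn/namespaces/1/device_path
    echo 1            > subsystems/$nqn/namespaces/1/enable
    echo 10.0.0.1     > ports/1/addr_traddr                   # listen address
    echo tcp          > ports/1/addr_trtype
    echo 4420         > ports/1/addr_trsvcid
    echo ipv4         > ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/$nqn ports/1/subsystems/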
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme1n1 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:40:28.496 No valid GPT data, bailing 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme1n1 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme1n1 ]] 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme1n1 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -a 10.0.0.1 -t tcp -s 4420 00:40:28.496 00:40:28.496 Discovery Log Number of Records 2, Generation counter 2 00:40:28.496 =====Discovery Log Entry 0====== 00:40:28.496 trtype: tcp 00:40:28.496 adrfam: ipv4 00:40:28.496 subtype: current discovery subsystem 00:40:28.496 treq: not specified, sq flow control disable supported 00:40:28.496 portid: 1 00:40:28.496 trsvcid: 4420 00:40:28.496 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:40:28.496 traddr: 10.0.0.1 00:40:28.496 eflags: none 00:40:28.496 sectype: none 00:40:28.496 =====Discovery Log Entry 1====== 00:40:28.496 trtype: tcp 00:40:28.496 adrfam: ipv4 00:40:28.496 subtype: nvme subsystem 00:40:28.496 treq: not 
specified, sq flow control disable supported 00:40:28.496 portid: 1 00:40:28.496 trsvcid: 4420 00:40:28.496 subnqn: nqn.2016-06.io.spdk:testnqn 00:40:28.496 traddr: 10.0.0.1 00:40:28.496 eflags: none 00:40:28.496 sectype: none 00:40:28.496 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:40:28.496 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:40:28.757 ===================================================== 00:40:28.757 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:40:28.757 ===================================================== 00:40:28.757 Controller Capabilities/Features 00:40:28.757 ================================ 00:40:28.757 Vendor ID: 0000 00:40:28.757 Subsystem Vendor ID: 0000 00:40:28.757 Serial Number: 736ccb2f075c647ceb9e 00:40:28.757 Model Number: Linux 00:40:28.757 Firmware Version: 6.8.9-20 00:40:28.757 Recommended Arb Burst: 0 00:40:28.757 IEEE OUI Identifier: 00 00 00 00:40:28.757 Multi-path I/O 00:40:28.757 May have multiple subsystem ports: No 00:40:28.757 May have multiple controllers: No 00:40:28.757 Associated with SR-IOV VF: No 00:40:28.757 Max Data Transfer Size: Unlimited 00:40:28.757 Max Number of Namespaces: 0 00:40:28.757 Max Number of I/O Queues: 1024 00:40:28.757 NVMe Specification Version (VS): 1.3 00:40:28.757 NVMe Specification Version (Identify): 1.3 00:40:28.757 Maximum Queue Entries: 1024 00:40:28.757 Contiguous Queues Required: No 00:40:28.757 Arbitration Mechanisms Supported 00:40:28.757 Weighted Round Robin: Not Supported 00:40:28.757 Vendor Specific: Not Supported 00:40:28.757 Reset Timeout: 7500 ms 00:40:28.757 Doorbell Stride: 4 bytes 00:40:28.757 NVM Subsystem Reset: Not Supported 00:40:28.757 Command Sets Supported 00:40:28.757 NVM Command Set: Supported 00:40:28.757 Boot Partition: Not Supported 00:40:28.757 Memory Page Size Minimum: 4096 bytes 00:40:28.757 Memory Page Size Maximum: 4096 bytes 00:40:28.757 Persistent Memory Region: Not Supported 00:40:28.757 Optional Asynchronous Events Supported 00:40:28.757 Namespace Attribute Notices: Not Supported 00:40:28.757 Firmware Activation Notices: Not Supported 00:40:28.757 ANA Change Notices: Not Supported 00:40:28.757 PLE Aggregate Log Change Notices: Not Supported 00:40:28.757 LBA Status Info Alert Notices: Not Supported 00:40:28.757 EGE Aggregate Log Change Notices: Not Supported 00:40:28.757 Normal NVM Subsystem Shutdown event: Not Supported 00:40:28.757 Zone Descriptor Change Notices: Not Supported 00:40:28.757 Discovery Log Change Notices: Supported 00:40:28.757 Controller Attributes 00:40:28.757 128-bit Host Identifier: Not Supported 00:40:28.757 Non-Operational Permissive Mode: Not Supported 00:40:28.757 NVM Sets: Not Supported 00:40:28.757 Read Recovery Levels: Not Supported 00:40:28.757 Endurance Groups: Not Supported 00:40:28.757 Predictable Latency Mode: Not Supported 00:40:28.757 Traffic Based Keep ALive: Not Supported 00:40:28.757 Namespace Granularity: Not Supported 00:40:28.757 SQ Associations: Not Supported 00:40:28.757 UUID List: Not Supported 00:40:28.757 Multi-Domain Subsystem: Not Supported 00:40:28.757 Fixed Capacity Management: Not Supported 00:40:28.757 Variable Capacity Management: Not Supported 00:40:28.757 Delete Endurance Group: Not Supported 00:40:28.757 Delete NVM Set: Not Supported 00:40:28.757 Extended LBA Formats Supported: Not Supported 00:40:28.757 Flexible Data 
Placement Supported: Not Supported 00:40:28.757 00:40:28.757 Controller Memory Buffer Support 00:40:28.757 ================================ 00:40:28.757 Supported: No 00:40:28.757 00:40:28.757 Persistent Memory Region Support 00:40:28.757 ================================ 00:40:28.757 Supported: No 00:40:28.757 00:40:28.757 Admin Command Set Attributes 00:40:28.757 ============================ 00:40:28.757 Security Send/Receive: Not Supported 00:40:28.757 Format NVM: Not Supported 00:40:28.757 Firmware Activate/Download: Not Supported 00:40:28.757 Namespace Management: Not Supported 00:40:28.757 Device Self-Test: Not Supported 00:40:28.757 Directives: Not Supported 00:40:28.757 NVMe-MI: Not Supported 00:40:28.757 Virtualization Management: Not Supported 00:40:28.757 Doorbell Buffer Config: Not Supported 00:40:28.757 Get LBA Status Capability: Not Supported 00:40:28.757 Command & Feature Lockdown Capability: Not Supported 00:40:28.757 Abort Command Limit: 1 00:40:28.757 Async Event Request Limit: 1 00:40:28.757 Number of Firmware Slots: N/A 00:40:28.757 Firmware Slot 1 Read-Only: N/A 00:40:28.757 Firmware Activation Without Reset: N/A 00:40:28.757 Multiple Update Detection Support: N/A 00:40:28.757 Firmware Update Granularity: No Information Provided 00:40:28.757 Per-Namespace SMART Log: No 00:40:28.757 Asymmetric Namespace Access Log Page: Not Supported 00:40:28.757 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:40:28.757 Command Effects Log Page: Not Supported 00:40:28.757 Get Log Page Extended Data: Supported 00:40:28.757 Telemetry Log Pages: Not Supported 00:40:28.757 Persistent Event Log Pages: Not Supported 00:40:28.757 Supported Log Pages Log Page: May Support 00:40:28.757 Commands Supported & Effects Log Page: Not Supported 00:40:28.757 Feature Identifiers & Effects Log Page:May Support 00:40:28.757 NVMe-MI Commands & Effects Log Page: May Support 00:40:28.757 Data Area 4 for Telemetry Log: Not Supported 00:40:28.757 Error Log Page Entries Supported: 1 00:40:28.757 Keep Alive: Not Supported 00:40:28.757 00:40:28.757 NVM Command Set Attributes 00:40:28.757 ========================== 00:40:28.757 Submission Queue Entry Size 00:40:28.757 Max: 1 00:40:28.757 Min: 1 00:40:28.757 Completion Queue Entry Size 00:40:28.757 Max: 1 00:40:28.757 Min: 1 00:40:28.757 Number of Namespaces: 0 00:40:28.757 Compare Command: Not Supported 00:40:28.757 Write Uncorrectable Command: Not Supported 00:40:28.757 Dataset Management Command: Not Supported 00:40:28.757 Write Zeroes Command: Not Supported 00:40:28.757 Set Features Save Field: Not Supported 00:40:28.757 Reservations: Not Supported 00:40:28.757 Timestamp: Not Supported 00:40:28.757 Copy: Not Supported 00:40:28.757 Volatile Write Cache: Not Present 00:40:28.757 Atomic Write Unit (Normal): 1 00:40:28.757 Atomic Write Unit (PFail): 1 00:40:28.757 Atomic Compare & Write Unit: 1 00:40:28.757 Fused Compare & Write: Not Supported 00:40:28.757 Scatter-Gather List 00:40:28.757 SGL Command Set: Supported 00:40:28.757 SGL Keyed: Not Supported 00:40:28.757 SGL Bit Bucket Descriptor: Not Supported 00:40:28.757 SGL Metadata Pointer: Not Supported 00:40:28.757 Oversized SGL: Not Supported 00:40:28.757 SGL Metadata Address: Not Supported 00:40:28.757 SGL Offset: Supported 00:40:28.757 Transport SGL Data Block: Not Supported 00:40:28.757 Replay Protected Memory Block: Not Supported 00:40:28.757 00:40:28.757 Firmware Slot Information 00:40:28.757 ========================= 00:40:28.757 Active slot: 0 00:40:28.757 00:40:28.757 00:40:28.757 Error Log 
00:40:28.757 ========= 00:40:28.757 00:40:28.757 Active Namespaces 00:40:28.757 ================= 00:40:28.757 Discovery Log Page 00:40:28.757 ================== 00:40:28.757 Generation Counter: 2 00:40:28.757 Number of Records: 2 00:40:28.757 Record Format: 0 00:40:28.757 00:40:28.757 Discovery Log Entry 0 00:40:28.757 ---------------------- 00:40:28.757 Transport Type: 3 (TCP) 00:40:28.757 Address Family: 1 (IPv4) 00:40:28.757 Subsystem Type: 3 (Current Discovery Subsystem) 00:40:28.757 Entry Flags: 00:40:28.757 Duplicate Returned Information: 0 00:40:28.757 Explicit Persistent Connection Support for Discovery: 0 00:40:28.757 Transport Requirements: 00:40:28.757 Secure Channel: Not Specified 00:40:28.757 Port ID: 1 (0x0001) 00:40:28.757 Controller ID: 65535 (0xffff) 00:40:28.757 Admin Max SQ Size: 32 00:40:28.758 Transport Service Identifier: 4420 00:40:28.758 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:40:28.758 Transport Address: 10.0.0.1 00:40:28.758 Discovery Log Entry 1 00:40:28.758 ---------------------- 00:40:28.758 Transport Type: 3 (TCP) 00:40:28.758 Address Family: 1 (IPv4) 00:40:28.758 Subsystem Type: 2 (NVM Subsystem) 00:40:28.758 Entry Flags: 00:40:28.758 Duplicate Returned Information: 0 00:40:28.758 Explicit Persistent Connection Support for Discovery: 0 00:40:28.758 Transport Requirements: 00:40:28.758 Secure Channel: Not Specified 00:40:28.758 Port ID: 1 (0x0001) 00:40:28.758 Controller ID: 65535 (0xffff) 00:40:28.758 Admin Max SQ Size: 32 00:40:28.758 Transport Service Identifier: 4420 00:40:28.758 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:40:28.758 Transport Address: 10.0.0.1 00:40:28.758 20:39:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:28.758 get_feature(0x01) failed 00:40:28.758 get_feature(0x02) failed 00:40:28.758 get_feature(0x04) failed 00:40:28.758 ===================================================== 00:40:28.758 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:28.758 ===================================================== 00:40:28.758 Controller Capabilities/Features 00:40:28.758 ================================ 00:40:28.758 Vendor ID: 0000 00:40:28.758 Subsystem Vendor ID: 0000 00:40:28.758 Serial Number: b2dc33badde058f6ab7b 00:40:28.758 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:40:28.758 Firmware Version: 6.8.9-20 00:40:28.758 Recommended Arb Burst: 6 00:40:28.758 IEEE OUI Identifier: 00 00 00 00:40:28.758 Multi-path I/O 00:40:28.758 May have multiple subsystem ports: Yes 00:40:28.758 May have multiple controllers: Yes 00:40:28.758 Associated with SR-IOV VF: No 00:40:28.758 Max Data Transfer Size: Unlimited 00:40:28.758 Max Number of Namespaces: 1024 00:40:28.758 Max Number of I/O Queues: 128 00:40:28.758 NVMe Specification Version (VS): 1.3 00:40:28.758 NVMe Specification Version (Identify): 1.3 00:40:28.758 Maximum Queue Entries: 1024 00:40:28.758 Contiguous Queues Required: No 00:40:28.758 Arbitration Mechanisms Supported 00:40:28.758 Weighted Round Robin: Not Supported 00:40:28.758 Vendor Specific: Not Supported 00:40:28.758 Reset Timeout: 7500 ms 00:40:28.758 Doorbell Stride: 4 bytes 00:40:28.758 NVM Subsystem Reset: Not Supported 00:40:28.758 Command Sets Supported 00:40:28.758 NVM Command Set: Supported 00:40:28.758 Boot Partition: Not Supported 00:40:28.758 Memory 
Page Size Minimum: 4096 bytes 00:40:28.758 Memory Page Size Maximum: 4096 bytes 00:40:28.758 Persistent Memory Region: Not Supported 00:40:28.758 Optional Asynchronous Events Supported 00:40:28.758 Namespace Attribute Notices: Supported 00:40:28.758 Firmware Activation Notices: Not Supported 00:40:28.758 ANA Change Notices: Supported 00:40:28.758 PLE Aggregate Log Change Notices: Not Supported 00:40:28.758 LBA Status Info Alert Notices: Not Supported 00:40:28.758 EGE Aggregate Log Change Notices: Not Supported 00:40:28.758 Normal NVM Subsystem Shutdown event: Not Supported 00:40:28.758 Zone Descriptor Change Notices: Not Supported 00:40:28.758 Discovery Log Change Notices: Not Supported 00:40:28.758 Controller Attributes 00:40:28.758 128-bit Host Identifier: Supported 00:40:28.758 Non-Operational Permissive Mode: Not Supported 00:40:28.758 NVM Sets: Not Supported 00:40:28.758 Read Recovery Levels: Not Supported 00:40:28.758 Endurance Groups: Not Supported 00:40:28.758 Predictable Latency Mode: Not Supported 00:40:28.758 Traffic Based Keep ALive: Supported 00:40:28.758 Namespace Granularity: Not Supported 00:40:28.758 SQ Associations: Not Supported 00:40:28.758 UUID List: Not Supported 00:40:28.758 Multi-Domain Subsystem: Not Supported 00:40:28.758 Fixed Capacity Management: Not Supported 00:40:28.758 Variable Capacity Management: Not Supported 00:40:28.758 Delete Endurance Group: Not Supported 00:40:28.758 Delete NVM Set: Not Supported 00:40:28.758 Extended LBA Formats Supported: Not Supported 00:40:28.758 Flexible Data Placement Supported: Not Supported 00:40:28.758 00:40:28.758 Controller Memory Buffer Support 00:40:28.758 ================================ 00:40:28.758 Supported: No 00:40:28.758 00:40:28.758 Persistent Memory Region Support 00:40:28.758 ================================ 00:40:28.758 Supported: No 00:40:28.758 00:40:28.758 Admin Command Set Attributes 00:40:28.758 ============================ 00:40:28.758 Security Send/Receive: Not Supported 00:40:28.758 Format NVM: Not Supported 00:40:28.758 Firmware Activate/Download: Not Supported 00:40:28.758 Namespace Management: Not Supported 00:40:28.758 Device Self-Test: Not Supported 00:40:28.758 Directives: Not Supported 00:40:28.758 NVMe-MI: Not Supported 00:40:28.758 Virtualization Management: Not Supported 00:40:28.758 Doorbell Buffer Config: Not Supported 00:40:28.758 Get LBA Status Capability: Not Supported 00:40:28.758 Command & Feature Lockdown Capability: Not Supported 00:40:28.758 Abort Command Limit: 4 00:40:28.758 Async Event Request Limit: 4 00:40:28.758 Number of Firmware Slots: N/A 00:40:28.758 Firmware Slot 1 Read-Only: N/A 00:40:28.758 Firmware Activation Without Reset: N/A 00:40:28.758 Multiple Update Detection Support: N/A 00:40:28.758 Firmware Update Granularity: No Information Provided 00:40:28.758 Per-Namespace SMART Log: Yes 00:40:28.758 Asymmetric Namespace Access Log Page: Supported 00:40:28.758 ANA Transition Time : 10 sec 00:40:28.758 00:40:28.758 Asymmetric Namespace Access Capabilities 00:40:28.758 ANA Optimized State : Supported 00:40:28.758 ANA Non-Optimized State : Supported 00:40:28.758 ANA Inaccessible State : Supported 00:40:28.758 ANA Persistent Loss State : Supported 00:40:28.758 ANA Change State : Supported 00:40:28.758 ANAGRPID is not changed : No 00:40:28.758 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:40:28.758 00:40:28.758 ANA Group Identifier Maximum : 128 00:40:28.758 Number of ANA Group Identifiers : 128 00:40:28.758 Max Number of Allowed Namespaces : 1024 00:40:28.758 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:40:28.758 Command Effects Log Page: Supported 00:40:28.758 Get Log Page Extended Data: Supported 00:40:28.758 Telemetry Log Pages: Not Supported 00:40:28.758 Persistent Event Log Pages: Not Supported 00:40:28.758 Supported Log Pages Log Page: May Support 00:40:28.758 Commands Supported & Effects Log Page: Not Supported 00:40:28.758 Feature Identifiers & Effects Log Page:May Support 00:40:28.758 NVMe-MI Commands & Effects Log Page: May Support 00:40:28.758 Data Area 4 for Telemetry Log: Not Supported 00:40:28.758 Error Log Page Entries Supported: 128 00:40:28.758 Keep Alive: Supported 00:40:28.758 Keep Alive Granularity: 1000 ms 00:40:28.758 00:40:28.758 NVM Command Set Attributes 00:40:28.758 ========================== 00:40:28.758 Submission Queue Entry Size 00:40:28.758 Max: 64 00:40:28.758 Min: 64 00:40:28.758 Completion Queue Entry Size 00:40:28.758 Max: 16 00:40:28.758 Min: 16 00:40:28.758 Number of Namespaces: 1024 00:40:28.758 Compare Command: Not Supported 00:40:28.758 Write Uncorrectable Command: Not Supported 00:40:28.758 Dataset Management Command: Supported 00:40:28.758 Write Zeroes Command: Supported 00:40:28.758 Set Features Save Field: Not Supported 00:40:28.758 Reservations: Not Supported 00:40:28.758 Timestamp: Not Supported 00:40:28.758 Copy: Not Supported 00:40:28.758 Volatile Write Cache: Present 00:40:28.758 Atomic Write Unit (Normal): 1 00:40:28.758 Atomic Write Unit (PFail): 1 00:40:28.758 Atomic Compare & Write Unit: 1 00:40:28.758 Fused Compare & Write: Not Supported 00:40:28.758 Scatter-Gather List 00:40:28.758 SGL Command Set: Supported 00:40:28.758 SGL Keyed: Not Supported 00:40:28.758 SGL Bit Bucket Descriptor: Not Supported 00:40:28.758 SGL Metadata Pointer: Not Supported 00:40:28.758 Oversized SGL: Not Supported 00:40:28.758 SGL Metadata Address: Not Supported 00:40:28.759 SGL Offset: Supported 00:40:28.759 Transport SGL Data Block: Not Supported 00:40:28.759 Replay Protected Memory Block: Not Supported 00:40:28.759 00:40:28.759 Firmware Slot Information 00:40:28.759 ========================= 00:40:28.759 Active slot: 0 00:40:28.759 00:40:28.759 Asymmetric Namespace Access 00:40:28.759 =========================== 00:40:28.759 Change Count : 0 00:40:28.759 Number of ANA Group Descriptors : 1 00:40:28.759 ANA Group Descriptor : 0 00:40:28.759 ANA Group ID : 1 00:40:28.759 Number of NSID Values : 1 00:40:28.759 Change Count : 0 00:40:28.759 ANA State : 1 00:40:28.759 Namespace Identifier : 1 00:40:28.759 00:40:28.759 Commands Supported and Effects 00:40:28.759 ============================== 00:40:28.759 Admin Commands 00:40:28.759 -------------- 00:40:28.759 Get Log Page (02h): Supported 00:40:28.759 Identify (06h): Supported 00:40:28.759 Abort (08h): Supported 00:40:28.759 Set Features (09h): Supported 00:40:28.759 Get Features (0Ah): Supported 00:40:28.759 Asynchronous Event Request (0Ch): Supported 00:40:28.759 Keep Alive (18h): Supported 00:40:28.759 I/O Commands 00:40:28.759 ------------ 00:40:28.759 Flush (00h): Supported 00:40:28.759 Write (01h): Supported LBA-Change 00:40:28.759 Read (02h): Supported 00:40:28.759 Write Zeroes (08h): Supported LBA-Change 00:40:28.759 Dataset Management (09h): Supported 00:40:28.759 00:40:28.759 Error Log 00:40:28.759 ========= 00:40:28.759 Entry: 0 00:40:28.759 Error Count: 0x3 00:40:28.759 Submission Queue Id: 0x0 00:40:28.759 Command Id: 0x5 00:40:28.759 Phase Bit: 0 00:40:28.759 Status Code: 0x2 00:40:28.759 Status Code Type: 0x0 00:40:28.759 Do Not Retry: 1 00:40:29.019 Error 
Location: 0x28 00:40:29.019 LBA: 0x0 00:40:29.019 Namespace: 0x0 00:40:29.019 Vendor Log Page: 0x0 00:40:29.019 ----------- 00:40:29.019 Entry: 1 00:40:29.019 Error Count: 0x2 00:40:29.019 Submission Queue Id: 0x0 00:40:29.019 Command Id: 0x5 00:40:29.019 Phase Bit: 0 00:40:29.019 Status Code: 0x2 00:40:29.019 Status Code Type: 0x0 00:40:29.019 Do Not Retry: 1 00:40:29.019 Error Location: 0x28 00:40:29.019 LBA: 0x0 00:40:29.019 Namespace: 0x0 00:40:29.019 Vendor Log Page: 0x0 00:40:29.019 ----------- 00:40:29.019 Entry: 2 00:40:29.019 Error Count: 0x1 00:40:29.019 Submission Queue Id: 0x0 00:40:29.019 Command Id: 0x4 00:40:29.019 Phase Bit: 0 00:40:29.019 Status Code: 0x2 00:40:29.019 Status Code Type: 0x0 00:40:29.019 Do Not Retry: 1 00:40:29.019 Error Location: 0x28 00:40:29.019 LBA: 0x0 00:40:29.019 Namespace: 0x0 00:40:29.019 Vendor Log Page: 0x0 00:40:29.019 00:40:29.019 Number of Queues 00:40:29.019 ================ 00:40:29.019 Number of I/O Submission Queues: 128 00:40:29.019 Number of I/O Completion Queues: 128 00:40:29.019 00:40:29.019 ZNS Specific Controller Data 00:40:29.019 ============================ 00:40:29.019 Zone Append Size Limit: 0 00:40:29.019 00:40:29.019 00:40:29.019 Active Namespaces 00:40:29.019 ================= 00:40:29.019 get_feature(0x05) failed 00:40:29.019 Namespace ID:1 00:40:29.019 Command Set Identifier: NVM (00h) 00:40:29.019 Deallocate: Supported 00:40:29.019 Deallocated/Unwritten Error: Not Supported 00:40:29.019 Deallocated Read Value: Unknown 00:40:29.019 Deallocate in Write Zeroes: Not Supported 00:40:29.019 Deallocated Guard Field: 0xFFFF 00:40:29.019 Flush: Supported 00:40:29.019 Reservation: Not Supported 00:40:29.019 Namespace Sharing Capabilities: Multiple Controllers 00:40:29.019 Size (in LBAs): 1310720 (5GiB) 00:40:29.019 Capacity (in LBAs): 1310720 (5GiB) 00:40:29.019 Utilization (in LBAs): 1310720 (5GiB) 00:40:29.019 UUID: 040d48d4-4711-4d18-88ed-7c198e930100 00:40:29.019 Thin Provisioning: Not Supported 00:40:29.019 Per-NS Atomic Units: Yes 00:40:29.019 Atomic Boundary Size (Normal): 0 00:40:29.019 Atomic Boundary Size (PFail): 0 00:40:29.019 Atomic Boundary Offset: 0 00:40:29.019 NGUID/EUI64 Never Reused: No 00:40:29.019 ANA group ID: 1 00:40:29.019 Namespace Write Protected: No 00:40:29.019 Number of LBA Formats: 1 00:40:29.019 Current LBA Format: LBA Format #00 00:40:29.019 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:40:29.019 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:29.019 rmmod nvme_tcp 00:40:29.019 rmmod nvme_fabrics 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:40:29.019 20:39:24 
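[annotation] nvmftestfini unloads the initiator-side transport with set +e in force inside a for i in {1..20} loop: modprobe -r can fail transiently while connections are still winding down, so the script retries rather than aborting. The rmmod lines above show nvme_tcp and nvme_fabrics coming out on the first pass, so no retry is visible; the shape of the loop, sketched (the sleep is illustrative):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e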
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:40:29.019 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:40:29.279 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:29.279 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:29.279 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:40:29.279 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:29.279 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:29.279 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:29.279 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:40:29.279 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:40:29.279 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:40:29.279 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:40:29.279 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:29.279 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:29.279 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:40:29.279 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:29.279 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:40:29.279 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:40:29.280 20:39:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:40:29.848 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:30.107 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:40:30.107 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:40:30.107 ************************************ 00:40:30.107 END TEST nvmf_identify_kernel_target 00:40:30.107 ************************************ 00:40:30.107 00:40:30.107 real 0m3.199s 00:40:30.107 user 0m1.122s 00:40:30.107 sys 0m1.473s 00:40:30.107 20:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:30.107 20:39:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:40:30.107 20:39:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:40:30.107 20:39:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:30.107 20:39:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:30.107 20:39:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:40:30.107 ************************************ 00:40:30.107 START TEST nvmf_auth_host 00:40:30.107 ************************************ 00:40:30.107 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:40:30.369 * Looking for test storage... 
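
The clean_kernel_target trace above dismantles the kernel nvmet target through configfs in strict child-before-parent order (configfs refuses to rmdir a directory that still has children, so the order matters). A condensed sketch of the traced sequence; xtrace does not show redirections, so the target of the 'echo 0' at nvmf/common.sh@712 is an assumption here:

# presumably disables the namespace before removal (redirect not visible in xtrace)
echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
# unlink the subsystem from the port, then remove leaf directories before parents
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
# with the configfs tree empty, the transport modules can be unloaded
modprobe -r nvmet_tcp nvmet

setup.sh (nvmf/common.sh@724) then rebinds the NVMe PCI devices to uio_pci_generic, which is what the 1b36 0010 lines above record.
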
00:40:30.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:30.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.369 --rc genhtml_branch_coverage=1 00:40:30.369 --rc genhtml_function_coverage=1 00:40:30.369 --rc genhtml_legend=1 00:40:30.369 --rc geninfo_all_blocks=1 00:40:30.369 --rc geninfo_unexecuted_blocks=1 00:40:30.369 00:40:30.369 ' 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:30.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.369 --rc genhtml_branch_coverage=1 00:40:30.369 --rc genhtml_function_coverage=1 00:40:30.369 --rc genhtml_legend=1 00:40:30.369 --rc geninfo_all_blocks=1 00:40:30.369 --rc geninfo_unexecuted_blocks=1 00:40:30.369 00:40:30.369 ' 00:40:30.369 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:30.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.369 --rc genhtml_branch_coverage=1 00:40:30.369 --rc genhtml_function_coverage=1 00:40:30.369 --rc genhtml_legend=1 00:40:30.369 --rc geninfo_all_blocks=1 00:40:30.369 --rc geninfo_unexecuted_blocks=1 00:40:30.370 00:40:30.370 ' 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:30.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.370 --rc genhtml_branch_coverage=1 00:40:30.370 --rc genhtml_function_coverage=1 00:40:30.370 --rc genhtml_legend=1 00:40:30.370 --rc geninfo_all_blocks=1 00:40:30.370 --rc geninfo_unexecuted_blocks=1 00:40:30.370 00:40:30.370 ' 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:30.370 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # nvmf_veth_init 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:30.370 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:30.371 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:30.371 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:30.371 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:40:30.371 Cannot find device "nvmf_init_br" 00:40:30.371 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:40:30.371 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:40:30.371 Cannot find device "nvmf_init_br2" 00:40:30.371 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:40:30.371 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:40:30.630 Cannot find device "nvmf_tgt_br" 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:40:30.630 Cannot find device "nvmf_tgt_br2" 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:40:30.630 Cannot find device "nvmf_init_br" 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:40:30.630 Cannot find device "nvmf_init_br2" 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:40:30.630 Cannot find device "nvmf_tgt_br" 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:40:30.630 Cannot find device "nvmf_tgt_br2" 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:40:30.630 Cannot find device "nvmf_br" 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:40:30.630 Cannot find device "nvmf_init_if" 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:40:30.630 Cannot find device "nvmf_init_if2" 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:30.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:30.630 20:39:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:30.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:40:30.630 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
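
nvmf_veth_init (nvmf/common.sh@177 through @214; the last bridge attach follows just below) builds the fixture network: four veth pairs whose *_if ends carry the addresses, with the target-side pair moved into the nvmf_tgt_ns_spdk namespace and all *_br peers joined to one bridge. Condensed from the trace, with the individual 'ip link set ... up' steps omitted:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator, 10.0.0.1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator, 10.0.0.2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target,    10.0.0.3
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target,    10.0.0.4
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br  master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br

The iptables ACCEPT rules and cross-namespace pings that follow confirm this path end to end before any NVMe/TCP traffic is attempted; the earlier "Cannot find device" noise comes from the guarded cleanup pass (@162 through @174) removing interfaces that did not yet exist.
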
00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:40:30.890 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:30.890 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:40:30.890 00:40:30.890 --- 10.0.0.3 ping statistics --- 00:40:30.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:30.890 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:40:30.890 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:40:30.890 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:40:30.890 00:40:30.890 --- 10.0.0.4 ping statistics --- 00:40:30.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:30.890 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:30.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:30.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:40:30.890 00:40:30.890 --- 10.0.0.1 ping statistics --- 00:40:30.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:30.890 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:40:30.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:30.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:40:30.890 00:40:30.890 --- 10.0.0.2 ping statistics --- 00:40:30.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:30.890 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # return 0 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:30.890 20:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:30.890 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:40:30.890 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:30.891 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:30.891 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:30.891 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=91706 00:40:30.891 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 91706 00:40:30.891 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:40:30.891 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 91706 ']' 00:40:30.891 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:30.891 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:30.891 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
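
With connectivity verified, nvmfappstart launches the SPDK target inside the namespace and waits for its RPC socket; the trace at nvmf/common.sh@506 shows the exact invocation and the resulting pid 91706. In outline:

modprobe nvme-tcp    # host-side initiator transport (nvmf/common.sh@500)
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
# waitforlisten polls until the app answers RPCs on /var/tmp/spdk.sock

Here -i sets the shared-memory id, -e 0xFFFF enables all tracepoint groups, and -L nvme_auth turns on the debug log component whose output the test dumps from nvme-auth.log on failure (see the trap registered at host/auth.sh@70 below).
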
00:40:30.891 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:30.891 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:31.459 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:31.459 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:40:31.459 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:31.459 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:31.459 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:31.459 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:31.459 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:40:31.459 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:40:31.459 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:40:31.459 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:31.459 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:40:31.459 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:40:31.459 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:40:31.459 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=92279e0626f3bb81e62b64670a2d663b 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.gGg 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 92279e0626f3bb81e62b64670a2d663b 0 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 92279e0626f3bb81e62b64670a2d663b 0 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=92279e0626f3bb81e62b64670a2d663b 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.gGg 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.gGg 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.gGg 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:40:31.460 20:39:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=4b742baec4f67a5a2fb23e8761709ef59ce768350a7fcca0c65b0349daeae0c8 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.4Cf 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 4b742baec4f67a5a2fb23e8761709ef59ce768350a7fcca0c65b0349daeae0c8 3 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 4b742baec4f67a5a2fb23e8761709ef59ce768350a7fcca0c65b0349daeae0c8 3 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=4b742baec4f67a5a2fb23e8761709ef59ce768350a7fcca0c65b0349daeae0c8 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.4Cf 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.4Cf 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.4Cf 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=9bb5868b44f79028e401344f08dad5b9f62631e232df5c03 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.SKU 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 9bb5868b44f79028e401344f08dad5b9f62631e232df5c03 0 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 9bb5868b44f79028e401344f08dad5b9f62631e232df5c03 0 
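
Each gen_dhchap_key call above draws randomness with xxd and hands the hex string to format_key together with the DHHC-1 prefix and a digest index (the digests map at nvmf/common.sh@750: null=0, sha256=1, sha384=2, sha512=3). The inline python is not expanded in the trace; the following is a sketch of what it plausibly emits, assuming the standard DH-HMAC-CHAP secret representation of base64(secret plus little-endian CRC-32), with gen_key a hypothetical stand-in name:

gen_key() {   # usage: gen_key <hex-chars: 32|48|64> <digest-index: 0..3>
    local len=$1 digest=$2 key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
    python3 - << EOF
import base64, zlib
# assumption: the ASCII hex string itself is the secret; a little-endian
# CRC-32 of it is appended before base64, giving "DHHC-1:<digest>:<b64>:"
crc = zlib.crc32(b"$key").to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format($digest, base64.b64encode(b"$key" + crc).decode()))
EOF
}
gen_key 32 0   # a 32-character null-digest key, like keys[0] above

The trace then writes each secret to a mktemp file, chmods it to 0600, and records the path in keys[] or ckeys[]; the ckey entries are registered as ckey0..ckey3 further down for the bidirectional-authentication cases.
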
00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=9bb5868b44f79028e401344f08dad5b9f62631e232df5c03 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.SKU 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.SKU 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.SKU 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:40:31.460 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=d67e4b3ac2f676e9d6fe675371c26af686b930512056799e 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.Yeq 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key d67e4b3ac2f676e9d6fe675371c26af686b930512056799e 2 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 d67e4b3ac2f676e9d6fe675371c26af686b930512056799e 2 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=d67e4b3ac2f676e9d6fe675371c26af686b930512056799e 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.Yeq 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.Yeq 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Yeq 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:31.720 20:39:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=e9bd3b44cd68fe9d975b82b809b29bf0 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.toZ 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key e9bd3b44cd68fe9d975b82b809b29bf0 1 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 e9bd3b44cd68fe9d975b82b809b29bf0 1 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=e9bd3b44cd68fe9d975b82b809b29bf0 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.toZ 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.toZ 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.toZ 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=e1b7a46be3c705ec825d96d9125c9fa5 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:40:31.720 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.IQs 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key e1b7a46be3c705ec825d96d9125c9fa5 1 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 e1b7a46be3c705ec825d96d9125c9fa5 1 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=e1b7a46be3c705ec825d96d9125c9fa5 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.IQs 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.IQs 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.IQs 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=1729060e42f542f3e11e9f5eb0c2d60366fcc882060ffa85 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.SVV 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 1729060e42f542f3e11e9f5eb0c2d60366fcc882060ffa85 2 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 1729060e42f542f3e11e9f5eb0c2d60366fcc882060ffa85 2 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=1729060e42f542f3e11e9f5eb0c2d60366fcc882060ffa85 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.SVV 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.SVV 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.SVV 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:40:31.721 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:31.980 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:40:31.980 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:40:31.980 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:40:31.980 20:39:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:40:31.980 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=f073cce1a8ec5080a941ce7660942960 00:40:31.980 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:40:31.980 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.iFM 00:40:31.980 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key f073cce1a8ec5080a941ce7660942960 0 00:40:31.980 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 f073cce1a8ec5080a941ce7660942960 0 00:40:31.980 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:40:31.980 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:40:31.980 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=f073cce1a8ec5080a941ce7660942960 00:40:31.980 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:40:31.980 20:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:40:31.980 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.iFM 00:40:31.980 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.iFM 00:40:31.980 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.iFM 00:40:31.980 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:40:31.980 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:40:31.980 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:40:31.980 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:40:31.980 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:40:31.980 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:40:31.980 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:40:31.980 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=351b336747f56dba10f6a9de445b4406d2d4be8fec4adfe7ec59c30fc3273689 00:40:31.980 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:40:31.980 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.VKW 00:40:31.980 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 351b336747f56dba10f6a9de445b4406d2d4be8fec4adfe7ec59c30fc3273689 3 00:40:31.980 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 351b336747f56dba10f6a9de445b4406d2d4be8fec4adfe7ec59c30fc3273689 3 00:40:31.980 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:40:31.980 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:40:31.980 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=351b336747f56dba10f6a9de445b4406d2d4be8fec4adfe7ec59c30fc3273689 00:40:31.980 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:40:31.980 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:40:31.980 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.VKW 00:40:31.981 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.VKW 00:40:31.981 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.VKW 00:40:31.981 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:40:31.981 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 91706 00:40:31.981 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 91706 ']' 00:40:31.981 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:31.981 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:31.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:31.981 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:31.981 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:31.981 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gGg 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.4Cf ]] 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4Cf 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.SKU 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Yeq ]] 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Yeq 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.toZ 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.IQs ]] 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IQs 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.SVV 00:40:32.240 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.iFM ]] 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.iFM 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.VKW 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:32.241 20:39:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:40:32.241 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:40:32.500 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:40:32.500 20:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:40:32.759 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:32.759 Waiting for block devices as requested 00:40:32.759 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:40:32.759 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:40:33.696 20:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:40:33.696 20:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:40:33.696 20:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:40:33.696 20:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:40:33.696 20:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:40:33.696 20:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:40:33.696 20:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:40:33.696 20:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:40:33.696 20:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:40:33.977 No valid GPT data, bailing 00:40:33.977 20:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:40:33.977 20:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:40:33.977 20:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:40:33.977 20:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:40:33.977 20:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:40:33.977 20:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n2 ]] 00:40:33.977 20:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n2 00:40:33.977 20:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:40:33.977 20:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:40:33.977 20:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:40:33.977 20:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n2 00:40:33.977 20:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:40:33.977 20:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:40:33.977 No valid GPT data, bailing 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
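[annotation] Each "No valid GPT data, bailing" message above is a probe succeeding, not failing: the helper walks /sys/block/nvme*, skips zoned namespaces, and treats a device with an empty blkid PTTYPE (no partition table) as safe to claim; the probes continue below until /dev/nvme1n1 wins. The mkdir/echo/ln -s calls that follow then export the chosen device through the kernel nvmet configfs tree. Condensed into one sketch, with the configfs attribute file names spelled out (these are the standard nvmet ones; the wrapped trace lines never show the redirection targets):

    modprobe nvmet
    nvme=""
    for block in /sys/block/nvme*; do
        dev=/dev/${block##*/}
        # non-zoned and no partition table => looks unused; last match wins
        [[ $(<"$block/queue/zoned") == none && -z $(blkid -s PTTYPE -o value "$dev") ]] && nvme=$dev
    done

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo "$nvme"  > "$subsys/namespaces/1/device_path"
    echo 1        > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"   # activate the subsystem on the port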
scripts/common.sh@395 -- # return 1 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n2 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n3 ]] 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n3 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n3 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:40:33.977 No valid GPT data, bailing 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n3 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme1n1 ]] 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme1n1 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme1n1 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:40:33.977 No valid GPT data, bailing 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme1n1 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme1n1 ]] 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:40:33.977 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:40:33.978 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:40:33.978 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme1n1 00:40:33.978 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:40:33.978 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:40:33.978 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:40:33.978 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:40:33.978 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:40:33.978 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -a 10.0.0.1 -t tcp -s 4420 00:40:34.241 00:40:34.241 Discovery Log Number of Records 2, Generation counter 2 00:40:34.241 =====Discovery Log Entry 0====== 00:40:34.241 trtype: tcp 00:40:34.241 adrfam: ipv4 00:40:34.241 subtype: current discovery subsystem 00:40:34.241 treq: not specified, sq flow control disable supported 00:40:34.241 portid: 1 00:40:34.241 trsvcid: 4420 00:40:34.241 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:40:34.241 traddr: 10.0.0.1 00:40:34.241 eflags: none 00:40:34.241 sectype: none 00:40:34.241 =====Discovery Log Entry 1====== 00:40:34.241 trtype: tcp 00:40:34.241 adrfam: ipv4 00:40:34.241 subtype: nvme subsystem 00:40:34.241 treq: not specified, sq flow control disable supported 00:40:34.241 portid: 1 00:40:34.241 trsvcid: 4420 00:40:34.241 subnqn: nqn.2024-02.io.spdk:cnode0 00:40:34.241 traddr: 10.0.0.1 00:40:34.241 eflags: none 00:40:34.241 sectype: none 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: ]] 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 
10.0.0.1 ]] 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:34.241 nvme0n1 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.241 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: ]] 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
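[annotation] That pair of rpc_cmd calls is the core of the whole test: bdev_nvme_set_options pins the initiator's allowed DH-HMAC-CHAP digests and DH groups, and bdev_nvme_attach_controller then connects to the kernel target using the keyring names registered earlier. The first, permissive probe above reduces to:

    # allow every digest and DH group for the initial smoke-test connection
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

    # key1 is the host secret; ckey1 lets the host verify the controller too
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

A successful round then surfaces as an nvme0n1 namespace, which the test confirms via bdev_nvme_get_controllers before detaching again.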
host/auth.sh@51 -- # echo DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:34.502 nvme0n1 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:34.502 
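[annotation] From this point on the trace is one pattern repeated: the host/auth.sh@100-@104 frames visible above come from a nest of loops over every digest, DH group, and key ID. The driver presumably looks roughly like this sketch, with connect_authenticate standing for the set-options/attach/get-controllers/detach sequence just shown:

    for digest in "${digests[@]}"; do          # sha256, sha384, sha512
        for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048 .. ffdhe8192
            for keyid in "${!keys[@]}"; do     # 0..4
                # reprogram the kernel target for this combination...
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                # ...then prove the SPDK initiator can still authenticate
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done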
20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: ]] 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:34.502 20:39:29 
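[annotation] The get_main_ns_ip trace resuming below fires before every single attach; it only maps the transport to the right environment variable and dereferences it. A sketch of what the nvmf/common.sh@767-@781 frames do, assuming the transport name arrives in a variable such as TEST_TRANSPORT (the xtrace has already expanded it to tcp):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp here, so NVMF_INITIATOR_IP
        echo "${!ip}"                          # indirect expansion; 10.0.0.1 in this run
    }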
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:34.502 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:34.503 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:40:34.503 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:34.762 nvme0n1 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:34.762 20:39:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: ]] 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:34.762 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:34.763 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:40:34.763 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.763 20:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:35.022 nvme0n1 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
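[annotation] On the target side, nvmet_auth_set_key is nothing but configfs writes into the host entry that host/auth.sh@36-@38 created and linked into the subsystem's allowed_hosts earlier. A sketch of the keyid=2 round above, assuming the standard nvmet-auth attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key; the wrapped trace never prints the redirection targets):

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest under test
    echo ffdhe2048      > "$host/dhchap_dhgroup"   # DH group under test
    echo "${keys[2]}"   > "$host/dhchap_key"       # host secret (DHHC-1:01:...)
    echo "${ckeys[2]}"  > "$host/dhchap_ctrl_key"  # controller secret; bidirectional rounds only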
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: ]] 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.022 20:39:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:35.022 nvme0n1 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.022 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:35.282 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.282 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:35.282 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:40:35.282 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:35.282 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:35.282 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:40:35.283 
20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
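[annotation] Key ID 4 is the one secret generated without a controller counterpart (ckeys[4] is empty, hence the [[ -z '' ]] checks above), so its rounds exercise unidirectional authentication: the host proves its identity but never challenges the controller, and the attach simply drops --dhchap-ctrlr-key. Side by side:

    # key IDs 0-3: bidirectional, both sides present a secret
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3

    # key ID 4: unidirectional, host-only authentication
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4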
00:40:35.283 nvme0n1 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:35.283 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: ]] 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:40:35.541 20:39:30 
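[annotation] Every secret echoed through this section uses the NVMe in-band authentication key format DHHC-1:<hash id>:<base64>:, and the key-file names in the trace map the hash id directly: 00 for the null transform, 01 for SHA-256, 02 for SHA-384, 03 for SHA-512. The base64 payload presumably carries the raw secret plus a 4-byte CRC-32 trailer; a quick decode of the key-ID-1 secret from the trace is consistent with that:

    key='DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==:'
    b64=${key#DHHC-1:??:}   # drop the "DHHC-1:<hash id>:" prefix
    b64=${b64%:}            # and the trailing colon
    echo -n "$b64" | base64 -d | wc -c   # prints 52: 48-byte secret + 4-byte trailer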
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.541 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:35.800 nvme0n1 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:35.800 20:39:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: ]] 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:35.800 20:39:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:35.800 20:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:36.059 nvme0n1 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: ]] 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:40:36.059 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:36.060 nvme0n1 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:36.060 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: ]] 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:36.319 nvme0n1 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:36.319 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.320 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:36.320 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.320 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:36.320 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:36.320 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.320 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:36.320 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.320 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:36.579 nvme0n1 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:36.579 20:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: ]] 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.148 20:39:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.148 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:37.408 nvme0n1 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: ]] 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:40:37.408 20:39:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.408 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:37.668 nvme0n1 00:40:37.668 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.668 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:37.668 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:37.668 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.668 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:37.668 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.668 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:37.668 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:37.668 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.668 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:37.668 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.668 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:37.668 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:40:37.668 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:37.668 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:37.668 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:40:37.668 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:40:37.668 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:37.668 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:37.668 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: ]] 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.669 20:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:37.928 nvme0n1 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: ]] 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:37.928 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:37.929 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:38.188 nvme0n1 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:38.188 20:39:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:38.188 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:38.189 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:38.189 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:40:38.189 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:38.189 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:38.448 nvme0n1 00:40:38.448 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:38.448 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:38.448 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:38.448 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:38.448 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:38.448 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:38.448 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:38.448 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:38.448 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:38.448 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:38.448 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:38.448 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:40:38.448 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:38.448 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:40:38.448 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:38.448 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:38.448 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:40:38.448 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:40:38.448 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:38.448 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:38.449 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:38.449 20:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: ]] 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:40.352 nvme0n1 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: ]] 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:40.352 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:40.353 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:40:40.353 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:40:40.353 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:40.353 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:40:40.353 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:40.353 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:40.353 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:40.353 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:40.353 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:40.353 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:40.353 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:40.353 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:40.353 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:40.353 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:40.353 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:40.353 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:40.353 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:40.353 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:40.353 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:40:40.353 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:40.353 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:40.612 nvme0n1 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:40.612 20:39:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: ]] 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:40.612 20:39:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:40.612 20:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:41.181 nvme0n1 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: ]] 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:41.181 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:41.182 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:41.182 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:41.182 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:41.182 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:41.182 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:41.182 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:41.182 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:40:41.182 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:41.182 
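[Annotation, not part of the captured log] The nvmet_auth_set_key trace above (host/auth.sh@42-51) provisions the kernel nvmet target with the key material for one digest/dhgroup/keyid combination before the initiator attempts to authenticate. A minimal sketch of where those echo lines plausibly land, assuming the stock Linux nvmet configfs layout; the configfs path and attribute names below are assumptions, since only the echoed values appear in the log:

    hostnqn=nqn.2024-02.io.spdk:host0            # from the -q argument of attach_controller below
    cfs=/sys/kernel/config/nvmet/hosts/$hostnqn  # assumed configfs location for per-host auth
    echo 'hmac(sha256)' > "$cfs/dhchap_hash"     # auth.sh@48: digest
    echo ffdhe6144 > "$cfs/dhchap_dhgroup"       # auth.sh@49: DH group
    echo 'DHHC-1:02:MTcy...' > "$cfs/dhchap_key"      # auth.sh@50: host key (truncated here)
    echo 'DHHC-1:00:ZjA3...' > "$cfs/dhchap_ctrl_key" # auth.sh@51: controller key; skipped when ckey is empty
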
20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:41.441 nvme0n1 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:41.441 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:41.700 nvme0n1 00:40:41.700 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:41.700 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:41.700 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:41.700 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:41.700 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:41.700 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:41.960 20:39:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: ]] 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:41.960 20:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:41.960 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:41.960 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:41.960 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:41.960 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:41.960 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:41.960 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:41.960 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:41.960 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:41.960 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:41.960 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:41.960 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:40:41.960 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:41.960 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:42.529 nvme0n1 00:40:42.529 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.529 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:42.529 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.529 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:42.529 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:42.529 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.529 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:42.529 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:42.529 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.529 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:42.529 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.529 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: ]] 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.530 20:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:43.099 nvme0n1 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: ]] 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:43.099 
20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.099 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:43.666 nvme0n1 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
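[Annotation, not part of the captured log] Each connect_authenticate iteration traced in this section runs the same initiator-side RPC sequence; every flag below is taken verbatim from the xtrace above, and rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py:

    digest=sha256 dhgroup=ffdhe8192 keyid=2      # values for the iteration just traced
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"  # --dhchap-ctrlr-key omitted when no ckey exists
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # auth.sh@64: authentication succeeded
    rpc_cmd bdev_nvme_detach_controller nvme0                               # auth.sh@65: tear down for the next keyid
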
DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: ]] 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.666 20:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:44.240 nvme0n1 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:44.240 20:39:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:44.240 20:39:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:44.240 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:44.808 nvme0n1 00:40:44.808 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:44.808 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:44.808 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:44.808 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:44.808 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:44.808 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:44.808 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:44.808 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:44.808 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:44.808 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:44.808 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: ]] 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:44.809 20:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:40:44.809 nvme0n1 00:40:44.809 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:44.809 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:44.809 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:44.809 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:44.809 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:45.067 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.067 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:45.067 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:45.067 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.067 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:45.067 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.067 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:45.067 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:40:45.067 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:45.067 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:45.067 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:40:45.067 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:40:45.067 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:45.067 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:45.067 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:45.067 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: ]] 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:45.068 nvme0n1 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:40:45.068 
20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: ]] 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:45.068 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:45.326 nvme0n1 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:45.326 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: ]] 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:45.327 
20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.327 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:45.584 nvme0n1 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:45.585 nvme0n1 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:45.585 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: ]] 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:45.844 nvme0n1 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:45.844 20:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.844 
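The get_main_ns_ip trace that keeps recurring above (nvmf/common.sh@767-781) reads back into roughly the helper below. This is a sketch reconstructed from the xtrace output, not the verbatim source; the TEST_TRANSPORT variable name is an assumption, since the trace only ever shows its expanded value, tcp.

    # Reconstructed sketch of get_main_ns_ip (nvmf/common.sh@767-781).
    # TEST_TRANSPORT is an assumed name; the xtrace only shows "tcp".
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}    # tcp -> NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1             # indirect expansion; 10.0.0.1 in this run
        echo "${!ip}"
    }

For RDMA runs the candidate would resolve to the first target IP instead; for TCP it resolves to NVMF_INITIATOR_IP, which is the 10.0.0.1 address every attach in this log connects to.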
20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: ]] 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:45.844 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:45.844 20:39:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:45.845 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:45.845 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:45.845 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:45.845 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:45.845 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:45.845 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:45.845 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:45.845 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:45.845 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:40:45.845 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.845 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:46.104 nvme0n1 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:46.104 20:39:41 
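A note on the DHHC-1 strings used as key/ckey above: in the NVMe in-band authentication secret representation, the two-digit field after DHHC-1 indicates how the secret is transformed (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), followed by the base64-encoded key material and a trailing colon. A hypothetical snippet, not part of auth.sh, splitting one of the keys logged above into those fields:

    # Hypothetical helper (not from auth.sh): split a DHHC-1 secret into fields.
    secret='DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1:'
    IFS=: read -r fmt hash b64 _ <<< "$secret"
    echo "$fmt $hash"   # DHHC-1 01 -> secret transformed with SHA-256
    echo "$b64"         # base64 key material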
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: ]] 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.104 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:46.364 nvme0n1 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: ]] 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.364 20:39:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.364 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:46.624 nvme0n1 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:40:46.624 
20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:46.624 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
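At this point the last ffdhe3072 attach (keyid 4) has gone through; after the usual verification the trace moves on to ffdhe4096. The overall shape of this phase, pieced together from the host/auth.sh@101-104 markers in the trace, is sketched below; the array contents are inferred from the iterations actually logged here, not copied from the source.

    # Sketch of the host/auth.sh@101-104 loop as observed in this trace.
    # dhgroups/keys contents are inferred from the logged iterations.
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups seen so far
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do                   # keyids 0..4 in this run
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # target-side key
            connect_authenticate sha384 "$dhgroup" "$keyid"  # host-side attach
        done
    done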
00:40:46.625 nvme0n1 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:46.625 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.884 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:46.884 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:46.884 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.884 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:46.884 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.884 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: ]] 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:40:46.885 20:39:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.885 20:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:46.885 nvme0n1 00:40:46.885 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.885 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:46.885 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:46.885 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.885 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:46.885 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:47.145 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:47.145 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:47.145 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:47.145 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:47.145 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:47.145 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:47.145 20:39:42 
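Every attach in this log is followed by the same verify-and-teardown step, visible at host/auth.sh@64-65 throughout: read back the controller list, assert that the authenticated attach produced nvme0, then detach before the next keyid. In isolation it amounts to:

    # Verify-and-teardown as logged at host/auth.sh@64-65.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0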
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:40:47.145 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:47.145 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:47.145 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:40:47.145 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:40:47.145 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:47.145 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:47.145 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:47.145 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:40:47.145 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:47.145 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: ]] 00:40:47.145 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:47.145 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:40:47.145 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:47.145 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:47.146 20:39:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:47.146 nvme0n1 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:47.146 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: ]] 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:40:47.405 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:47.406 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:47.406 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:47.406 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:47.406 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:47.406 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:47.406 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:47.406 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:47.406 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:47.406 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:47.406 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:47.406 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:47.406 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:47.406 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:47.406 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:40:47.406 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:47.406 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:47.406 nvme0n1 00:40:47.406 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:47.406 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:47.406 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:47.406 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:47.406 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:47.406 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: ]] 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:47.665 nvme0n1 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:47.665 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:47.925 20:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:47.925 nvme0n1 00:40:47.925 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:47.925 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:47.925 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:47.925 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:47.925 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: ]] 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.184 20:39:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.184 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:48.443 nvme0n1 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: ]] 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:40:48.443 20:39:43 
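
Each repetition in this trace is one connect_authenticate round, driven entirely through SPDK JSON-RPCs: pin the host to a single digest/DH-group pair, attach over TCP with the key under test, confirm the controller appeared, detach. Stripped of the rpc_cmd wrapper, the round that just completed above (sha384 / ffdhe6144 / key ID 1) reduces to roughly the calls below; this sketch assumes a target already listening on 10.0.0.1:4420 with the matching secret installed for nqn.2024-02.io.spdk:host0, and that key1/ckey1 name keys already loaded into the host-side SPDK keyring (the rpc path is a placeholder):

    rpc=./scripts/rpc.py   # placeholder: point at the host-side SPDK app

    # Allow exactly one digest and one DH group for this round.
    "$rpc" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

    # Attach with DH-HMAC-CHAP; the ctrlr key makes authentication bidirectional.
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Authentication succeeded iff the controller materialized.
    [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    "$rpc" bdev_nvme_detach_controller nvme0
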
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.443 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:49.012 nvme0n1 00:40:49.012 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:49.012 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:49.012 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:49.012 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:49.012 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:49.012 20:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: ]] 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:49.012 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:49.272 nvme0n1 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
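
The ckey=() assignment traced at host/auth.sh@58 above leans on bash's ${var:+word} expansion: it yields the --dhchap-ctrlr-key ckeyN argument pair only when a controller key is defined for that key ID, and nothing otherwise (key ID 4 in this trace has an empty ckey, so those rounds authenticate unidirectionally). A standalone demonstration of the idiom, with hypothetical array contents:

    # ${var:+word} expands to word only if var is set and non-empty.
    ckeys=("ctrl-secret-0" "" "ctrl-secret-2")   # hypothetical contents

    for keyid in "${!ckeys[@]}"; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra args:" "${ckey[@]}"
    done
    # keyid=0 -> 2 extra args: --dhchap-ctrlr-key ckey0
    # keyid=1 -> 0 extra args:
    # keyid=2 -> 2 extra args: --dhchap-ctrlr-key ckey2
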
nvmet_auth_set_key sha384 ffdhe6144 3 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: ]] 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:49.272 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:49.531 nvme0n1 00:40:49.531 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:49.531 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:49.531 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:49.531 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:49.531 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:49.531 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:49.790 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:49.791 20:39:44 
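
get_main_ns_ip, expanded repeatedly in this trace (nvmf/common.sh@767-781), resolves the address to dial from the transport in use: an associative array maps rdma/tcp to the name of the environment variable holding the address, and that name is then dereferenced and echoed (10.0.0.1 here). xtrace prints the executed commands with values already substituted, so the sketch below puts the variables back; it is a paraphrase of the visible steps, not the actual common.sh source:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1       # traced: [[ -z tcp ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}       # traced: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                # traced: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                              # traced: echo 10.0.0.1
    }

    TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip   # prints 10.0.0.1
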
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:49.791 20:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:50.050 nvme0n1 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: ]] 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:50.050 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:50.619 nvme0n1 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: ]] 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:50.619 20:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:51.187 nvme0n1 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:51.187 20:39:46 
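
On the target side, nvmet_auth_set_key (host/auth.sh@42-51) installs the secret for the host NQN before every attach. xtrace prints the echo commands but not their redirections, so the destinations are not visible in this log; assuming the test drives the kernel nvmet target through configfs (the usual setup for these nvmf_auth_host runs), the writes for the round above (sha384 / ffdhe8192 / key ID 1) would land roughly as below. The attribute paths are an inference from the kernel nvmet layout, not taken from this trace:

    # ASSUMPTION: kernel nvmet target with per-host DH-CHAP configfs attributes.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    # key1/ckey1 as they appear in the trace above.
    key='DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==:'
    ckey='DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==:'

    echo 'hmac(sha384)' > "$host_dir/dhchap_hash"      # digest for this round
    echo ffdhe8192      > "$host_dir/dhchap_dhgroup"   # DH group for this round
    echo "$key"         > "$host_dir/dhchap_key"       # DHHC-1 host secret
    # Only when a controller key exists for this key ID:
    [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"
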
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: ]] 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:51.187 20:39:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:51.187 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:51.756 nvme0n1 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: ]] 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:40:51.756 20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:51.756 
20:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:52.324 nvme0n1 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:52.324 20:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:52.891 nvme0n1 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:40:52.891 20:39:48 
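
The three loop headers at host/auth.sh@100-102 give the overall shape of the test: every DH-CHAP digest is crossed with every DH group, and each pair is exercised with all key IDs; the rounds above covered sha384, and the trace here advances the outer loop to sha512 with ffdhe2048. Reconstructed shape, with the values restricted to the ones visible in this excerpt (the script's real arrays may hold more):

    digests=(sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe4096 ffdhe6144 ffdhe8192)
    keys=(key0 key1 key2 key3 key4)

    for digest in "${digests[@]}"; do            # host/auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do      # host/auth.sh@101
            for keyid in "${!keys[@]}"; do       # host/auth.sh@102
                # @103-104: nvmet_auth_set_key, then connect_authenticate
                echo "round: $digest $dhgroup keyid=$keyid"
            done
        done
    done
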
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: ]] 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:40:52.891 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:53.150 20:39:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:53.150 nvme0n1 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: ]] 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:53.150 20:39:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.150 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:53.409 nvme0n1 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: ]] 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:53.409 nvme0n1 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.409 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: ]] 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.668 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:53.669 nvme0n1 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.669 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:53.927 nvme0n1 00:40:53.927 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.927 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:53.927 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.927 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:53.927 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:53.927 20:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: ]] 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:40:53.927 nvme0n1 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:53.927 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: ]] 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:54.186 nvme0n1 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:40:54.186 
20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:54.186 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:40:54.187 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:54.187 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: ]] 00:40:54.187 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:54.187 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:40:54.187 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:54.187 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:54.187 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:40:54.187 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:40:54.187 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:54.187 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:40:54.187 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.187 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:54.187 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:54.445 nvme0n1 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: ]] 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:54.445 
20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.445 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:54.446 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.446 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:54.446 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:54.446 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:54.446 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:54.446 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:54.446 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:54.446 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:54.446 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:54.446 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:54.446 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:54.446 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:54.446 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:40:54.446 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.446 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:54.704 nvme0n1 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.704 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:54.989 nvme0n1 00:40:54.989 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.989 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:54.989 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.989 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:54.989 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:54.989 20:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: ]] 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:54.989 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:54.990 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:54.990 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:54.990 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:40:54.990 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.990 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:54.990 nvme0n1 00:40:54.990 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.990 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:54.990 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:54.990 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.990 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.250 
20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: ]] 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:55.250 20:39:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.250 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:55.251 nvme0n1 00:40:55.251 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.251 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:55.251 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:55.251 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.251 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:55.530 20:39:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: ]] 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:55.530 nvme0n1 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.530 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: ]] 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.798 20:39:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:40:55.798 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.799 20:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:55.799 nvme0n1 00:40:55.799 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.799 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:55.799 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.799 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:55.799 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:55.799 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:40:56.058 
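The get_main_ns_ip block repeated before every attach is how the helper picks the address to dial: nvmf/common.sh keeps an associative array mapping the transport to the name of the environment variable that holds the address (rdma -> NVMF_FIRST_TARGET_IP, tcp -> NVMF_INITIATOR_IP) and then dereferences it, which is why this tcp run resolves to 10.0.0.1 every time. A reconstruction condensed from the @767-@781 markers in the trace (the early-return behavior on the -z guards is assumed):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        # @773: bail out if the transport is unset or has no candidate
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}  # name of the variable, not its value
        [[ -z ${!ip} ]] && return 1           # @776: make sure it is populated
        echo "${!ip}"                         # @781: 10.0.0.1 in this run
    }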
20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
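Each pass through this trace is one connect_authenticate round: host/auth.sh programs the target-side key for the chosen index via nvmet_auth_set_key, restricts the SPDK host to a single digest/dhgroup pair with bdev_nvme_set_options, attaches with the matching key names, confirms that bdev_nvme_get_controllers reports nvme0, and detaches before the next index. For key index 4 no controller key is defined, so the ${ckeys[keyid]:+...} expansion at @58 drops --dhchap-ctrlr-key entirely and authentication is unidirectional. Condensed to the RPCs visible above (only the framing outside the RPC calls is assumed):

    # one sha512/ffdhe4096 round, key index 0
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # auth succeeded
    rpc_cmd bdev_nvme_detach_controller nvme0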
00:40:56.058 nvme0n1 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:56.058 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: ]] 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:40:56.317 20:39:51 
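The host/auth.sh@101 and @102 markers that bracket each block are the driver loops: the outer one walks the DH groups (ffdhe4096 above, ffdhe6144 just entered, ffdhe8192 later in this section) and the inner one walks every key index, so each digest/dhgroup/key combination gets its own attach/verify/detach cycle. The implied structure, with the sha512 digest coming from an enclosing loop that sits outside this excerpt:

    for dhgroup in "${dhgroups[@]}"; do      # @101: ffdhe4096, ffdhe6144, ffdhe8192, ...
        for keyid in "${!keys[@]}"; do       # @102: indices 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103: target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104: host side
        done
    done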
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.317 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:56.576 nvme0n1 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:56.576 20:39:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: ]] 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:56.576 20:39:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.576 20:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:56.834 nvme0n1 00:40:56.834 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.834 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:56.834 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:56.834 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.834 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:56.834 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: ]] 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:57.093 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:57.094 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:57.094 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:57.094 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:57.094 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:57.094 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:57.094 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:57.094 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:57.094 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:57.094 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:57.094 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:40:57.094 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.094 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:57.352 nvme0n1 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: ]] 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.352 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:57.611 nvme0n1 00:40:57.611 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.611 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:57.611 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.611 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:57.611 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:57.611 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:57.870 20:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:58.129 nvme0n1 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTIyNzllMDYyNmYzYmI4MWU2MmI2NDY3MGEyZDY2M2IIVF/u: 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: ]] 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGI3NDJiYWVjNGY2N2E1YTJmYjIzZTg3NjE3MDllZjU5Y2U3NjgzNTBhN2ZjY2EwYzY1YjAzNDlkYWVhZTBjOLZri8o=: 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.129 20:39:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.129 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:58.696 nvme0n1 00:40:58.696 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.696 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:58.696 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:58.696 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.696 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:58.696 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.696 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:58.696 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:58.696 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.696 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:58.696 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.696 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:58.696 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:40:58.696 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:58.696 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:58.696 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:40:58.696 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:40:58.696 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: ]] 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:40:58.697 20:39:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.697 20:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:59.264 nvme0n1 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: ]] 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.264 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:59.831 nvme0n1 00:40:59.831 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.831 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:59.831 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:59.831 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.831 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:59.831 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.831 20:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:59.831 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:59.831 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.831 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:59.831 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.831 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:59.831 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:40:59.831 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:59.831 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:59.831 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:40:59.831 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:40:59.831 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:59.831 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:59.831 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:59.831 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:40:59.831 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTcyOTA2MGU0MmY1NDJmM2UxMWU5ZjVlYjBjMmQ2MDM2NmZjYzg4MjA2MGZmYTg1K1ND4w==: 00:40:59.831 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: ]] 00:40:59.831 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjA3M2NjZTFhOGVjNTA4MGE5NDFjZTc2NjA5NDI5NjCcBZZE: 00:40:59.831 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:40:59.831 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:59.831 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:59.831 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:40:59.832 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:40:59.832 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:59.832 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:40:59.832 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.832 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:59.832 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.832 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:59.832 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:40:59.832 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:40:59.832 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:40:59.832 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:59.832 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:59.832 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:40:59.832 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:59.832 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:40:59.832 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:40:59.832 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:40:59.832 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:40:59.832 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.832 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:00.399 nvme0n1 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzUxYjMzNjc0N2Y1NmRiYTEwZjZhOWRlNDQ1YjQ0MDZkMmQ0YmU4ZmVjNGFkZmU3ZWM1OWMzMGZjMzI3MzY4Oe4GP/E=: 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:00.399 20:39:55 
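The four echoes at host/auth.sh@48-51 inside every nvmet_auth_set_key block program the kernel target's side of DH-HMAC-CHAP; xtrace hides the redirections, so the configfs paths below are an assumption based on the standard nvmet host attributes, with the values taken from the keyid=3 block above:

    # Target-side key programming (paths assumed; xtrace shows only the
    # echoed values, not the redirection targets).
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host_dir/dhchap_hash"    # auth.sh@48, digest
    echo ffdhe8192 > "$host_dir/dhchap_dhgroup"      # auth.sh@49, DH group
    echo "$key" > "$host_dir/dhchap_key"             # auth.sh@50, host key
    echo "$ckey" > "$host_dir/dhchap_ctrl_key"       # auth.sh@51, controller key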
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:41:00.399 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:00.400 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:00.400 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:41:00.400 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:00.400 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:41:00.400 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:41:00.400 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:41:00.400 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:00.400 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:00.400 20:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:00.968 nvme0n1 00:41:00.968 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:00.968 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:00.968 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
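That attach, carrying key4 but no controller key, finishes the sha512/ffdhe8192 sweep. The enclosing loop at host/auth.sh@102-104 runs once per keyid, and the @58 idiom makes the controller-key flag conditional: an empty ckeys[keyid] expands to nothing, which is why keyid 4 connected one-way. Roughly:

    # One pass of the sweep traced above (host/auth.sh@102-104).
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key sha512 ffdhe8192 "$keyid"   # program the kernel target
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
        # @58: the array stays empty when no bidirectional key exists for this keyid
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # ...then the verify/detach step shown earlier
    done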
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: ]] 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:41:00.969 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:41:01.228 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:01.228 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:41:01.228 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:01.228 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:41:01.228 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.228 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:01.228 2024/10/01 20:39:56 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:41:01.228 request: 00:41:01.228 { 00:41:01.228 "method": "bdev_nvme_attach_controller", 00:41:01.228 "params": { 00:41:01.228 "name": "nvme0", 00:41:01.228 "trtype": "tcp", 00:41:01.228 "traddr": "10.0.0.1", 00:41:01.228 "adrfam": "ipv4", 00:41:01.228 "trsvcid": "4420", 00:41:01.228 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:41:01.228 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:41:01.228 "prchk_reftag": false, 00:41:01.228 "prchk_guard": false, 00:41:01.228 "hdgst": false, 00:41:01.228 "ddgst": false, 00:41:01.228 "allow_unrecognized_csi": false 00:41:01.228 } 00:41:01.228 } 00:41:01.228 Got JSON-RPC error response 00:41:01.228 GoRPCClient: error on JSON-RPC call 00:41:01.228 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:41:01.228 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # 
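With the target re-keyed to sha256/ffdhe2048 (host/auth.sh@110-111), the attach above deliberately carries no DHCHAP key, so the Code=-5 Input/output error is the expected outcome: NOT inverts rpc_cmd's exit status and the test passes only because the connection was refused. The next two attaches, wrong host key (key2) and then the right key with a mismatched controller key (key1/ckey2), are required to fail the same way. The shape of the check:

    # NOT succeeds only when the wrapped command fails; a minimal stand-in for
    # the autotest_common.sh helper (the real one also classifies the exit
    # status, per the es= bookkeeping visible in the trace).
    NOT() { ! "$@"; }
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0   # no --dhchap-key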
get_main_ns_ip 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:01.229 2024/10/01 20:39:56 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:41:01.229 request: 00:41:01.229 { 00:41:01.229 "method": "bdev_nvme_attach_controller", 00:41:01.229 "params": { 00:41:01.229 "name": "nvme0", 00:41:01.229 "trtype": "tcp", 00:41:01.229 "traddr": "10.0.0.1", 00:41:01.229 "adrfam": "ipv4", 00:41:01.229 "trsvcid": "4420", 00:41:01.229 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:41:01.229 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:41:01.229 "prchk_reftag": false, 00:41:01.229 "prchk_guard": false, 
00:41:01.229 "hdgst": false, 00:41:01.229 "ddgst": false, 00:41:01.229 "dhchap_key": "key2", 00:41:01.229 "allow_unrecognized_csi": false 00:41:01.229 } 00:41:01.229 } 00:41:01.229 Got JSON-RPC error response 00:41:01.229 GoRPCClient: error on JSON-RPC call 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t 
rpc_cmd 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.229 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:01.229 2024/10/01 20:39:56 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:41:01.229 request: 00:41:01.229 { 00:41:01.229 "method": "bdev_nvme_attach_controller", 00:41:01.229 "params": { 00:41:01.229 "name": "nvme0", 00:41:01.229 "trtype": "tcp", 00:41:01.229 "traddr": "10.0.0.1", 00:41:01.229 "adrfam": "ipv4", 00:41:01.229 "trsvcid": "4420", 00:41:01.229 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:41:01.229 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:41:01.229 "prchk_reftag": false, 00:41:01.230 "prchk_guard": false, 00:41:01.230 "hdgst": false, 00:41:01.230 "ddgst": false, 00:41:01.230 "dhchap_key": "key1", 00:41:01.230 "dhchap_ctrlr_key": "ckey2", 00:41:01.230 "allow_unrecognized_csi": false 00:41:01.230 } 00:41:01.230 } 00:41:01.230 Got JSON-RPC error response 00:41:01.230 GoRPCClient: error on JSON-RPC call 00:41:01.230 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:41:01.230 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:41:01.230 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:01.230 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:01.230 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:01.230 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:41:01.230 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:41:01.230 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:41:01.230 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:41:01.230 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:01.230 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:01.230 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:41:01.230 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:01.230 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:41:01.230 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:41:01.230 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 
10.0.0.1 00:41:01.230 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:41:01.230 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.230 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:01.489 nvme0n1 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: ]] 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
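host/auth.sh@128 re-attaches with the matching pair plus one-second reconnect timers, so the failure cases that follow resolve quickly, and @132-133 then perform a coordinated live re-key: rotate the kernel target to keyid 2 first, then swap the keys on the attached controller. Condensed from the trace:

    # Live re-key, target first, then host; the short timers bound how long a
    # controller with stale keys lingers after a failed re-authentication.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1 \
        --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
    nvmet_auth_set_key sha256 ffdhe2048 2                              # target side
    rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2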
common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:01.489 2024/10/01 20:39:56 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-5 Msg=Input/output error 00:41:01.489 request: 00:41:01.489 { 00:41:01.489 "method": "bdev_nvme_set_keys", 00:41:01.489 "params": { 00:41:01.489 "name": "nvme0", 00:41:01.489 "dhchap_key": "key1", 00:41:01.489 "dhchap_ctrlr_key": "ckey2" 00:41:01.489 } 00:41:01.489 } 00:41:01.489 Got JSON-RPC error response 00:41:01.489 GoRPCClient: error on JSON-RPC call 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:41:01.489 20:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:41:02.867 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:41:02.867 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:41:02.867 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:02.867 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.867 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:02.867 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:41:02.868 20:39:57 
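The rejected set_keys (key1 with ckey2, Code=-5) leaves the controller unable to re-authenticate, and with the one-second timers it is soon reaped; the jq length checks and sleep 1s at host/auth.sh@137-138 above poll for exactly that, first seeing 1 controller and then 0. The wait loop implied by the trace:

    # Poll until the failed controller disappears (count 1 -> 0 above).
    while (( $(rpc_cmd bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1s
    done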
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWJiNTg2OGI0NGY3OTAyOGU0MDEzNDRmMDhkYWQ1YjlmNjI2MzFlMjMyZGY1YzAz5Sf68w==: 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: ]] 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDY3ZTRiM2FjMmY2NzZlOWQ2ZmU2NzUzNzFjMjZhZjY4NmI5MzA1MTIwNTY3OTll2u4klw==: 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.868 nvme0n1 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTliZDNiNDRjZDY4ZmU5ZDk3NWI4MmI4MDliMjliZjC4rpN1: 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: ]] 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTFiN2E0NmJlM2M3MDVlYzgyNWQ5NmQ5MTI1YzlmYTUAXdwQ: 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.868 2024/10/01 20:39:57 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:41:02.868 request: 00:41:02.868 { 00:41:02.868 "method": "bdev_nvme_set_keys", 00:41:02.868 "params": { 00:41:02.868 "name": "nvme0", 00:41:02.868 "dhchap_key": "key2", 00:41:02.868 "dhchap_ctrlr_key": "ckey1" 00:41:02.868 } 00:41:02.868 } 00:41:02.868 Got JSON-RPC error response 00:41:02.868 GoRPCClient: error on JSON-RPC call 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:02.868 20:39:57 
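The converse mismatch at host/auth.sh@147, a new host key paired with the stale controller key, is refused with Code=-13 Permission denied rather than the transport-level -5 seen earlier, which reads as an authentication rejection of the re-key itself; the @148 poll that follows confirms this controller is torn down as well:

    # Mismatched controller key during re-key: expected to be denied outright.
    NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1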
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:41:02.868 20:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:41:03.804 20:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:41:03.804 20:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:03.804 20:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:03.804 20:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:41:03.804 20:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:03.804 20:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:41:03.804 20:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:41:03.804 20:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:41:03.804 20:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:41:03.804 20:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:03.804 20:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:41:03.804 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:03.804 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:41:03.804 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:03.804 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:03.804 rmmod nvme_tcp 00:41:03.804 rmmod nvme_fabrics 00:41:03.804 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:03.804 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:41:03.804 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:41:03.804 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 91706 ']' 00:41:03.804 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 91706 00:41:03.804 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 91706 ']' 00:41:03.804 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 91706 00:41:03.804 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:41:03.804 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:03.804 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91706 00:41:04.062 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:04.062 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:04.062 killing process 
with pid 91706 00:41:04.062 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91706' 00:41:04.062 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 91706 00:41:04.062 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 91706 00:41:04.062 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:04.062 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:04.062 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:04.062 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:41:04.062 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:41:04.062 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:04.062 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:41:04.062 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:04.062 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:41:04.062 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:41:04.062 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:41:04.062 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:41:04.062 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:41:04.062 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:41:04.062 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:41:04.321 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:41:04.321 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:41:04.321 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:41:04.321 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:41:04.321 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:41:04.321 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:04.321 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:04.321 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:41:04.321 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:04.321 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:04.321 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:04.321 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:41:04.321 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:41:04.321 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:41:04.321 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:41:04.321 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:41:04.321 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:41:04.321 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:41:04.321 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:41:04.321 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:41:04.321 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:41:04.322 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:41:04.322 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:41:04.322 20:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:41:05.258 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:05.258 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:41:05.258 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:41:05.258 20:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.gGg /tmp/spdk.key-null.SKU /tmp/spdk.key-sha256.toZ /tmp/spdk.key-sha384.SVV /tmp/spdk.key-sha512.VKW /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:41:05.258 20:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:41:05.517 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:05.803 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:41:05.803 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:41:05.803 00:41:05.803 real 0m35.468s 00:41:05.803 user 0m32.739s 00:41:05.803 sys 0m4.165s 00:41:05.803 20:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:05.803 ************************************ 00:41:05.803 END TEST nvmf_auth_host 00:41:05.803 ************************************ 00:41:05.804 20:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.804 20:40:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:41:05.804 20:40:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:41:05.804 20:40:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:41:05.804 20:40:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:05.804 20:40:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:41:05.804 ************************************ 00:41:05.804 START TEST nvmf_digest 00:41:05.804 
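Before the digest suite begins, the cleanup above unwinds the kernel target in strict configfs order: symlinks before the directories they reference, children before parents, modules last. Condensed from host/auth.sh@25-27 and nvmf/common.sh@710-721 (the destination of the bare 'echo 0' is hidden by xtrace; the namespace enable attribute below is an assumption):

    cfg=/sys/kernel/config/nvmet
    rm "$cfg/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0"
    rmdir "$cfg/hosts/nqn.2024-02.io.spdk:host0"
    echo 0 > "$cfg/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable"   # assumed path
    rm -f "$cfg/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"
    rmdir "$cfg/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1"
    rmdir "$cfg/ports/1"
    rmdir "$cfg/subsystems/nqn.2024-02.io.spdk:cnode0"
    modprobe -r nvmet_tcp nvmet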
************************************ 00:41:05.804 20:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:41:05.804 * Looking for test storage... 00:41:05.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:41:05.804 20:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:05.804 20:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:41:05.804 20:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:06.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:06.063 --rc genhtml_branch_coverage=1 00:41:06.063 --rc genhtml_function_coverage=1 00:41:06.063 --rc genhtml_legend=1 00:41:06.063 --rc geninfo_all_blocks=1 00:41:06.063 --rc geninfo_unexecuted_blocks=1 00:41:06.063 00:41:06.063 ' 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:06.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:06.063 --rc genhtml_branch_coverage=1 00:41:06.063 --rc genhtml_function_coverage=1 00:41:06.063 --rc genhtml_legend=1 00:41:06.063 --rc geninfo_all_blocks=1 00:41:06.063 --rc geninfo_unexecuted_blocks=1 00:41:06.063 00:41:06.063 ' 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:06.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:06.063 --rc genhtml_branch_coverage=1 00:41:06.063 --rc genhtml_function_coverage=1 00:41:06.063 --rc genhtml_legend=1 00:41:06.063 --rc geninfo_all_blocks=1 00:41:06.063 --rc geninfo_unexecuted_blocks=1 00:41:06.063 00:41:06.063 ' 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:06.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:06.063 --rc genhtml_branch_coverage=1 00:41:06.063 --rc genhtml_function_coverage=1 00:41:06.063 --rc genhtml_legend=1 00:41:06.063 --rc geninfo_all_blocks=1 00:41:06.063 --rc geninfo_unexecuted_blocks=1 00:41:06.063 00:41:06.063 ' 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:06.063 20:40:01 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:06.063 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:06.064 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@458 -- # nvmf_veth_init 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:41:06.064 Cannot find device "nvmf_init_br" 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:41:06.064 Cannot find device "nvmf_init_br2" 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:41:06.064 Cannot find device "nvmf_tgt_br" 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:41:06.064 Cannot find device "nvmf_tgt_br2" 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:41:06.064 Cannot find device "nvmf_init_br" 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:41:06.064 Cannot find device "nvmf_init_br2" 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:41:06.064 Cannot find device "nvmf_tgt_br" 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:41:06.064 Cannot find device "nvmf_tgt_br2" 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:41:06.064 Cannot find device "nvmf_br" 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:41:06.064 Cannot find device "nvmf_init_if" 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:41:06.064 Cannot find device "nvmf_init_if2" 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:06.064 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:06.064 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:41:06.064 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:41:06.065 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:06.065 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:41:06.065 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:06.065 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:06.065 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:06.065 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:06.065 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:06.065 20:40:01 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:41:06.323 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:41:06.323 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:41:06.323 00:41:06.323 --- 10.0.0.3 ping statistics --- 00:41:06.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:06.323 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:41:06.323 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:41:06.323 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:41:06.323 00:41:06.323 --- 10.0.0.4 ping statistics --- 00:41:06.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:06.323 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:06.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:06.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:41:06.323 00:41:06.323 --- 10.0.0.1 ping statistics --- 00:41:06.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:06.323 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:41:06.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:06.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:41:06.323 00:41:06.323 --- 10.0.0.2 ping statistics --- 00:41:06.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:06.323 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # return 0 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:41:06.323 ************************************ 00:41:06.323 START TEST nvmf_digest_clean 00:41:06.323 ************************************ 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
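
For reference, the nvmf_veth_init sequence traced above reduces to the topology below. This is a hand-run sketch using the device and address names from the log; it covers only the first initiator/target pair (nvmf_init_if at 10.0.0.1, nvmf_tgt_if at 10.0.0.3) and omits the second pair and the pre-cleanup that common.sh also performs:

    # target gets its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if end carries traffic, the *_br end plugs into a bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # initiator at 10.0.0.1, target at 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the two *_br ends so initiator and target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # open the NVMe/TCP port and allow bridge-internal forwarding
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # sanity check, as in the log
    ping -c 1 10.0.0.3
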
00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=93497 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 93497 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 93497 ']' 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:06.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:06.323 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:06.581 [2024-10-01 20:40:01.581846] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:41:06.581 [2024-10-01 20:40:01.581949] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:06.581 [2024-10-01 20:40:01.725344] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:06.581 [2024-10-01 20:40:01.795235] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:06.581 [2024-10-01 20:40:01.795295] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:06.581 [2024-10-01 20:40:01.795319] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:06.581 [2024-10-01 20:40:01.795329] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:06.581 [2024-10-01 20:40:01.795338] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
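
The nvmfappstart/waitforlisten pair traced here amounts to launching nvmf_tgt inside the target namespace and polling its RPC socket until it answers. A minimal stand-in (the retry loop is an assumption; the real waitforlisten in autotest_common.sh is more elaborate):

    SPDK=/home/vagrant/spdk_repo/spdk
    # start the target paused (--wait-for-rpc) inside the namespace
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # wait until the default RPC socket /var/tmp/spdk.sock responds
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1 && break
        sleep 0.1
    done
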
00:41:06.581 [2024-10-01 20:40:01.795370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:06.581 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:06.581 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:41:06.581 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:06.581 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:06.581 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:06.840 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:06.840 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:41:06.840 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:41:06.840 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:41:06.840 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.840 20:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:06.840 null0 00:41:06.840 [2024-10-01 20:40:02.011419] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:06.840 [2024-10-01 20:40:02.035573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:41:06.840 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.840 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:41:06.840 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:41:06.840 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:41:06.840 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:41:06.840 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:41:06.840 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:41:06.840 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:41:06.840 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93529 00:41:06.840 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:41:06.840 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93529 /var/tmp/bperf.sock 00:41:06.840 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 93529 ']' 00:41:06.840 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:06.840 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:41:06.840 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:06.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:06.840 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:06.840 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:07.098 [2024-10-01 20:40:02.102799] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:41:07.098 [2024-10-01 20:40:02.102885] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93529 ] 00:41:07.098 [2024-10-01 20:40:02.237866] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:07.098 [2024-10-01 20:40:02.290039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:07.098 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:07.098 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:41:07.098 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:41:07.098 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:41:07.098 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:41:07.665 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:41:07.665 20:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:41:07.923 nvme0n1 00:41:07.923 20:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:41:07.923 20:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:07.923 Running I/O for 2 seconds... 
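
Each bperf run that follows uses the same four-step pattern: start bdevperf idle and paused, finish framework init over its private RPC socket, attach an NVMe-oF controller with data digest enabled, then drive I/O from bdevperf.py. A condensed sketch with the paths and arguments from this first run (randread, 4096-byte I/O, queue depth 128):

    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF_SOCK=/var/tmp/bperf.sock
    # -z keeps bdevperf idle; --wait-for-rpc pauses init until told to go
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" framework_start_init
    # --ddgst enables the NVMe/TCP data digest this test is exercising
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # run the timed workload against the resulting bdev (nvme0n1)
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
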
00:41:10.235 21562.00 IOPS, 84.23 MiB/s 21797.50 IOPS, 85.15 MiB/s 00:41:10.235 Latency(us) 00:41:10.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:10.235 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:41:10.236 nvme0n1 : 2.00 21811.50 85.20 0.00 0.00 5861.40 3008.70 19779.96 00:41:10.236 =================================================================================================================== 00:41:10.236 Total : 21811.50 85.20 0.00 0.00 5861.40 3008.70 19779.96 00:41:10.236 { 00:41:10.236 "results": [ 00:41:10.236 { 00:41:10.236 "job": "nvme0n1", 00:41:10.236 "core_mask": "0x2", 00:41:10.236 "workload": "randread", 00:41:10.236 "status": "finished", 00:41:10.236 "queue_depth": 128, 00:41:10.236 "io_size": 4096, 00:41:10.236 "runtime": 2.003805, 00:41:10.236 "iops": 21811.503614373654, 00:41:10.236 "mibps": 85.20118599364709, 00:41:10.236 "io_failed": 0, 00:41:10.236 "io_timeout": 0, 00:41:10.236 "avg_latency_us": 5861.396055461492, 00:41:10.236 "min_latency_us": 3008.6981818181816, 00:41:10.236 "max_latency_us": 19779.956363636364 00:41:10.236 } 00:41:10.236 ], 00:41:10.236 "core_count": 1 00:41:10.236 } 00:41:10.236 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:41:10.236 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:41:10.236 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:41:10.236 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:41:10.236 | select(.opcode=="crc32c") 00:41:10.236 | "\(.module_name) \(.executed)"' 00:41:10.236 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:41:10.236 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:41:10.236 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:41:10.236 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:41:10.236 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:41:10.236 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93529 00:41:10.236 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 93529 ']' 00:41:10.236 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 93529 00:41:10.236 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:41:10.236 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:10.236 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93529 00:41:10.495 killing process with pid 93529 00:41:10.495 Received shutdown signal, test time was about 2.000000 seconds 00:41:10.495 00:41:10.495 Latency(us) 00:41:10.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:10.495 =================================================================================================================== 
00:41:10.495 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:10.496 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:10.496 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:10.496 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93529' 00:41:10.496 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 93529 00:41:10.496 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 93529 00:41:10.496 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:41:10.496 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:41:10.496 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:41:10.496 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:41:10.496 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:41:10.496 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:41:10.496 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:41:10.496 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93606 00:41:10.496 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:41:10.496 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93606 /var/tmp/bperf.sock 00:41:10.496 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 93606 ']' 00:41:10.496 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:10.496 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:10.496 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:10.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:10.496 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:10.496 20:40:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:10.755 I/O size of 131072 is greater than zero copy threshold (65536). 00:41:10.755 Zero copy mechanism will not be used. 00:41:10.755 [2024-10-01 20:40:05.758203] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:41:10.755 [2024-10-01 20:40:05.758304] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93606 ] 00:41:10.755 [2024-10-01 20:40:05.890614] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:10.755 [2024-10-01 20:40:05.944267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:11.693 20:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:11.693 20:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:41:11.693 20:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:41:11.693 20:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:41:11.693 20:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:41:11.952 20:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:41:11.952 20:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:41:12.211 nvme0n1 00:41:12.211 20:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:41:12.211 20:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:12.471 I/O size of 131072 is greater than zero copy threshold (65536). 00:41:12.471 Zero copy mechanism will not be used. 00:41:12.471 Running I/O for 2 seconds... 
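
After each run the test reads back which accel module executed the crc32c digests; with DSA disabled (scan_dsa=false) the expected module is software. The check is just accel_get_stats piped through the jq filter seen in the trace:

    # keep only crc32c operations from the accel framework stats
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        accel_get_stats |
        jq -rc '.operations[]
                | select(.opcode=="crc32c")
                | "\(.module_name) \(.executed)"'
    # expected shape of the output: "software <count>", with count > 0
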
00:41:14.342 9543.00 IOPS, 1192.88 MiB/s 9443.50 IOPS, 1180.44 MiB/s 00:41:14.342 Latency(us) 00:41:14.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:14.342 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:41:14.342 nvme0n1 : 2.00 9438.96 1179.87 0.00 0.00 1691.87 536.20 3440.64 00:41:14.343 =================================================================================================================== 00:41:14.343 Total : 9438.96 1179.87 0.00 0.00 1691.87 536.20 3440.64 00:41:14.343 { 00:41:14.343 "results": [ 00:41:14.343 { 00:41:14.343 "job": "nvme0n1", 00:41:14.343 "core_mask": "0x2", 00:41:14.343 "workload": "randread", 00:41:14.343 "status": "finished", 00:41:14.343 "queue_depth": 16, 00:41:14.343 "io_size": 131072, 00:41:14.343 "runtime": 2.002975, 00:41:14.343 "iops": 9438.959547672837, 00:41:14.343 "mibps": 1179.8699434591047, 00:41:14.343 "io_failed": 0, 00:41:14.343 "io_timeout": 0, 00:41:14.343 "avg_latency_us": 1691.8728638335112, 00:41:14.343 "min_latency_us": 536.2036363636364, 00:41:14.343 "max_latency_us": 3440.64 00:41:14.343 } 00:41:14.343 ], 00:41:14.343 "core_count": 1 00:41:14.343 } 00:41:14.343 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:41:14.343 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:41:14.343 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:41:14.343 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:41:14.343 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:41:14.343 | select(.opcode=="crc32c") 00:41:14.343 | "\(.module_name) \(.executed)"' 00:41:14.602 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:41:14.602 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:41:14.602 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:41:14.602 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:41:14.602 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93606 00:41:14.602 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 93606 ']' 00:41:14.602 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 93606 00:41:14.602 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:41:14.602 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:14.602 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93606 00:41:14.602 killing process with pid 93606 00:41:14.602 Received shutdown signal, test time was about 2.000000 seconds 00:41:14.602 00:41:14.602 Latency(us) 00:41:14.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:14.602 =================================================================================================================== 00:41:14.602 Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:14.602 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:14.602 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:14.602 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93606' 00:41:14.602 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 93606 00:41:14.602 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 93606 00:41:14.860 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:41:14.860 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:41:14.860 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:41:14.860 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:41:14.860 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:41:14.860 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:41:14.860 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:41:14.860 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:41:14.860 20:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93695 00:41:14.860 20:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93695 /var/tmp/bperf.sock 00:41:14.860 20:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 93695 ']' 00:41:14.860 20:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:14.860 20:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:14.860 20:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:14.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:14.860 20:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:14.860 20:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:14.860 [2024-10-01 20:40:10.043619] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:41:14.861 [2024-10-01 20:40:10.043712] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93695 ] 00:41:15.119 [2024-10-01 20:40:10.174666] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:15.119 [2024-10-01 20:40:10.227536] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:16.054 20:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:16.054 20:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:41:16.054 20:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:41:16.054 20:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:41:16.054 20:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:41:16.313 20:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:41:16.313 20:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:41:16.572 nvme0n1 00:41:16.572 20:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:41:16.572 20:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:16.831 Running I/O for 2 seconds... 
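
Every run is torn down with killprocess on the bperf pid. Judging from the checks visible in the trace, the helper behaves roughly like the sketch below (a reconstruction, not the exact autotest_common.sh body):

    killprocess() {
        local pid=$1
        # fail fast if the process is already gone
        kill -0 "$pid" 2>/dev/null || return 1
        # refuse to signal a sudo wrapper by mistake
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
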
00:41:18.752 26213.00 IOPS, 102.39 MiB/s 26266.00 IOPS, 102.60 MiB/s 00:41:18.752 Latency(us) 00:41:18.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:18.752 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:18.752 nvme0n1 : 2.00 26287.34 102.68 0.00 0.00 4864.24 2457.60 13583.83 00:41:18.752 =================================================================================================================== 00:41:18.752 Total : 26287.34 102.68 0.00 0.00 4864.24 2457.60 13583.83 00:41:18.752 { 00:41:18.752 "results": [ 00:41:18.752 { 00:41:18.752 "job": "nvme0n1", 00:41:18.752 "core_mask": "0x2", 00:41:18.752 "workload": "randwrite", 00:41:18.752 "status": "finished", 00:41:18.752 "queue_depth": 128, 00:41:18.752 "io_size": 4096, 00:41:18.752 "runtime": 2.003246, 00:41:18.752 "iops": 26287.33565423318, 00:41:18.752 "mibps": 102.68490489934835, 00:41:18.752 "io_failed": 0, 00:41:18.752 "io_timeout": 0, 00:41:18.752 "avg_latency_us": 4864.241902012913, 00:41:18.752 "min_latency_us": 2457.6, 00:41:18.752 "max_latency_us": 13583.825454545455 00:41:18.752 } 00:41:18.752 ], 00:41:18.752 "core_count": 1 00:41:18.752 } 00:41:18.752 20:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:41:18.752 20:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:41:18.752 20:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:41:18.752 20:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:41:18.752 | select(.opcode=="crc32c") 00:41:18.752 | "\(.module_name) \(.executed)"' 00:41:18.752 20:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:41:19.028 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:41:19.028 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:41:19.028 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:41:19.028 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:41:19.028 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93695 00:41:19.028 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 93695 ']' 00:41:19.028 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 93695 00:41:19.028 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:41:19.028 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:19.028 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93695 00:41:19.028 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:19.028 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:19.028 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 93695' 00:41:19.028 killing process with pid 93695 00:41:19.028 Received shutdown signal, test time was about 2.000000 seconds 00:41:19.028 00:41:19.028 Latency(us) 00:41:19.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:19.028 =================================================================================================================== 00:41:19.028 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:19.028 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 93695 00:41:19.028 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 93695 00:41:19.285 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:41:19.285 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:41:19.285 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:41:19.285 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:41:19.285 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:41:19.285 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:41:19.285 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:41:19.285 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93782 00:41:19.285 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93782 /var/tmp/bperf.sock 00:41:19.285 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 93782 ']' 00:41:19.285 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:19.285 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:19.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:19.285 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:19.285 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:19.285 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:19.285 20:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:41:19.285 I/O size of 131072 is greater than zero copy threshold (65536). 00:41:19.285 Zero copy mechanism will not be used. 00:41:19.285 [2024-10-01 20:40:14.474961] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:41:19.285 [2024-10-01 20:40:14.475077] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93782 ] 00:41:19.544 [2024-10-01 20:40:14.611534] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:19.544 [2024-10-01 20:40:14.666156] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:20.481 20:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:20.481 20:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:41:20.481 20:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:41:20.481 20:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:41:20.481 20:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:41:20.740 20:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:41:20.740 20:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:41:20.999 nvme0n1 00:41:20.999 20:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:41:20.999 20:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:20.999 I/O size of 131072 is greater than zero copy threshold (65536). 00:41:20.999 Zero copy mechanism will not be used. 00:41:20.999 Running I/O for 2 seconds... 
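The throughput columns in the result block below are derived, not independently measured: mibps is iops * io_size / 2^20, so for the 131072-byte I/O of this run MiB/s is simply IOPS / 8. A quick consistency check against the JSON that bdevperf emits, assuming it has been saved to a file (results.json is a hypothetical name; jq is already used by this test for the accel stats):

  jq -r '.results[] | [.iops, .io_size, .mibps] | @tsv' results.json |
      awk '{ printf "computed %.2f MiB/s, reported %.2f MiB/s\n", $1*$2/1048576, $3 }'

For the run below this prints computed 942.29 MiB/s, reported 942.29 MiB/s.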
00:41:23.314 7523.00 IOPS, 940.38 MiB/s 7543.00 IOPS, 942.88 MiB/s 00:41:23.314 Latency(us) 00:41:23.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:23.314 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:41:23.314 nvme0n1 : 2.00 7538.33 942.29 0.00 0.00 2117.55 1675.64 4408.79 00:41:23.314 =================================================================================================================== 00:41:23.314 Total : 7538.33 942.29 0.00 0.00 2117.55 1675.64 4408.79 00:41:23.314 { 00:41:23.314 "results": [ 00:41:23.314 { 00:41:23.314 "job": "nvme0n1", 00:41:23.314 "core_mask": "0x2", 00:41:23.314 "workload": "randwrite", 00:41:23.314 "status": "finished", 00:41:23.314 "queue_depth": 16, 00:41:23.314 "io_size": 131072, 00:41:23.314 "runtime": 2.003627, 00:41:23.314 "iops": 7538.329239923399, 00:41:23.314 "mibps": 942.2911549904248, 00:41:23.314 "io_failed": 0, 00:41:23.314 "io_timeout": 0, 00:41:23.314 "avg_latency_us": 2117.5479815100157, 00:41:23.314 "min_latency_us": 1675.6363636363637, 00:41:23.314 "max_latency_us": 4408.785454545455 00:41:23.314 } 00:41:23.314 ], 00:41:23.314 "core_count": 1 00:41:23.314 } 00:41:23.314 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:41:23.314 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:41:23.314 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:41:23.314 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:41:23.314 | select(.opcode=="crc32c") 00:41:23.314 | "\(.module_name) \(.executed)"' 00:41:23.314 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:41:23.314 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:41:23.314 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:41:23.314 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:41:23.314 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:41:23.314 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93782 00:41:23.314 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 93782 ']' 00:41:23.314 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 93782 00:41:23.314 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:41:23.314 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:23.314 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93782 00:41:23.574 killing process with pid 93782 00:41:23.574 Received shutdown signal, test time was about 2.000000 seconds 00:41:23.574 00:41:23.574 Latency(us) 00:41:23.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:23.574 =================================================================================================================== 
00:41:23.574 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:23.574 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:23.574 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:23.574 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93782' 00:41:23.574 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 93782 00:41:23.574 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 93782 00:41:23.574 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 93497 00:41:23.575 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 93497 ']' 00:41:23.575 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 93497 00:41:23.575 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:41:23.575 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:23.575 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93497 00:41:23.575 killing process with pid 93497 00:41:23.575 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:23.575 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:23.575 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93497' 00:41:23.575 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 93497 00:41:23.575 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 93497 00:41:23.834 00:41:23.834 real 0m17.485s 00:41:23.834 user 0m34.419s 00:41:23.835 sys 0m4.186s 00:41:23.835 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:23.835 20:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:41:23.835 ************************************ 00:41:23.835 END TEST nvmf_digest_clean 00:41:23.835 ************************************ 00:41:23.835 20:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:41:23.835 20:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:23.835 20:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:23.835 20:40:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:41:23.835 ************************************ 00:41:23.835 START TEST nvmf_digest_error 00:41:23.835 ************************************ 00:41:23.835 20:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:41:23.835 20:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:41:23.835 20:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:23.835 20:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:23.835 20:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:41:23.835 20:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=93901 00:41:23.835 20:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 93901 00:41:23.835 20:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 93901 ']' 00:41:23.835 20:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:41:23.835 20:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:23.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:23.835 20:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:23.835 20:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:23.835 20:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:23.835 20:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:41:24.093 [2024-10-01 20:40:19.104638] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:41:24.093 [2024-10-01 20:40:19.104701] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:24.093 [2024-10-01 20:40:19.235887] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:24.093 [2024-10-01 20:40:19.284437] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:24.094 [2024-10-01 20:40:19.284500] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:24.094 [2024-10-01 20:40:19.284525] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:24.094 [2024-10-01 20:40:19.284532] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:24.094 [2024-10-01 20:40:19.284537] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
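The trace notices above are actionable while the target is running: nvmf_tgt was started with -e 0xFFFF, so every tracepoint group is enabled, and the instance id from -i 0 names the shared-memory file. Following the notices' own instructions (a sketch, run on the same host):

  # capture a snapshot of events at runtime
  spdk_trace -s nvmf -i 0
  # or keep the shm file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0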
00:41:24.094 [2024-10-01 20:40:19.284566] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:41:25.029 [2024-10-01 20:40:20.124928] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:41:25.029 null0 00:41:25.029 [2024-10-01 20:40:20.236078] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:25.029 [2024-10-01 20:40:20.260200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93946 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93946 /var/tmp/bperf.sock 00:41:25.029 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:41:25.030 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 93946 ']' 00:41:25.030 20:40:20 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:25.030 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:25.030 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:25.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:25.030 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:25.030 20:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:41:25.288 [2024-10-01 20:40:20.327169] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:41:25.288 [2024-10-01 20:40:20.327477] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93946 ] 00:41:25.288 [2024-10-01 20:40:20.469087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:25.288 [2024-10-01 20:40:20.538583] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:26.226 20:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:26.226 20:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:41:26.226 20:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:41:26.226 20:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:41:26.485 20:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:41:26.485 20:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.485 20:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:41:26.485 20:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.485 20:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:41:26.485 20:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:41:26.743 nvme0n1 00:41:26.743 20:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:41:26.743 20:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:26.743 20:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:41:26.743 20:40:21 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:26.743 20:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:41:26.743 20:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:27.003 Running I/O for 2 seconds... 00:41:27.003 [2024-10-01 20:40:22.021922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.003 [2024-10-01 20:40:22.021979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.003 [2024-10-01 20:40:22.021992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.003 [2024-10-01 20:40:22.035309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.003 [2024-10-01 20:40:22.035357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.003 [2024-10-01 20:40:22.035368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.003 [2024-10-01 20:40:22.047556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.003 [2024-10-01 20:40:22.047602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.003 [2024-10-01 20:40:22.047613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.003 [2024-10-01 20:40:22.060682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.003 [2024-10-01 20:40:22.060730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.003 [2024-10-01 20:40:22.060741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.003 [2024-10-01 20:40:22.073089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.003 [2024-10-01 20:40:22.073146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.003 [2024-10-01 20:40:22.073158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.003 [2024-10-01 20:40:22.084953] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.003 [2024-10-01 20:40:22.084999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.003 [2024-10-01 20:40:22.085011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:41:27.003 [2024-10-01 20:40:22.097398] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.003 [2024-10-01 20:40:22.097445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.003 [2024-10-01 20:40:22.097457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.003 [2024-10-01 20:40:22.110061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.003 [2024-10-01 20:40:22.110107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.003 [2024-10-01 20:40:22.110118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.003 [2024-10-01 20:40:22.122921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.003 [2024-10-01 20:40:22.122968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.003 [2024-10-01 20:40:22.122980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.003 [2024-10-01 20:40:22.134540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.003 [2024-10-01 20:40:22.134587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.003 [2024-10-01 20:40:22.134598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.003 [2024-10-01 20:40:22.147649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.003 [2024-10-01 20:40:22.147697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.003 [2024-10-01 20:40:22.147709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.003 [2024-10-01 20:40:22.158129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.003 [2024-10-01 20:40:22.158186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.003 [2024-10-01 20:40:22.158198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.003 [2024-10-01 20:40:22.172098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.003 [2024-10-01 20:40:22.172161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.003 [2024-10-01 20:40:22.172174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.003 [2024-10-01 20:40:22.183779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.003 [2024-10-01 20:40:22.183847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.003 [2024-10-01 20:40:22.183859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.003 [2024-10-01 20:40:22.195940] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.003 [2024-10-01 20:40:22.195986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.003 [2024-10-01 20:40:22.195998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.003 [2024-10-01 20:40:22.207070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.003 [2024-10-01 20:40:22.207115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:26 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.003 [2024-10-01 20:40:22.207135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.003 [2024-10-01 20:40:22.218697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.004 [2024-10-01 20:40:22.218741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.004 [2024-10-01 20:40:22.218753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.004 [2024-10-01 20:40:22.230306] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.004 [2024-10-01 20:40:22.230351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.004 [2024-10-01 20:40:22.230362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.004 [2024-10-01 20:40:22.241689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.004 [2024-10-01 20:40:22.241735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.004 [2024-10-01 20:40:22.241746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.004 [2024-10-01 20:40:22.252870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.004 [2024-10-01 20:40:22.252917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.004 [2024-10-01 20:40:22.252929] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.263 [2024-10-01 20:40:22.265724] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.263 [2024-10-01 20:40:22.265769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.264 [2024-10-01 20:40:22.265780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.264 [2024-10-01 20:40:22.277301] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.264 [2024-10-01 20:40:22.277346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.264 [2024-10-01 20:40:22.277356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.264 [2024-10-01 20:40:22.287281] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.264 [2024-10-01 20:40:22.287326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.264 [2024-10-01 20:40:22.287338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.264 [2024-10-01 20:40:22.300772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.264 [2024-10-01 20:40:22.300817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.264 [2024-10-01 20:40:22.300828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.264 [2024-10-01 20:40:22.311909] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.264 [2024-10-01 20:40:22.311955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.264 [2024-10-01 20:40:22.311966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.264 [2024-10-01 20:40:22.322865] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.264 [2024-10-01 20:40:22.322910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.264 [2024-10-01 20:40:22.322921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.264 [2024-10-01 20:40:22.334179] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.264 [2024-10-01 20:40:22.334222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.264 [2024-10-01 
20:40:22.334233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.264 [2024-10-01 20:40:22.345913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.264 [2024-10-01 20:40:22.345957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.264 [2024-10-01 20:40:22.345967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.264 [2024-10-01 20:40:22.357977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.264 [2024-10-01 20:40:22.358022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.264 [2024-10-01 20:40:22.358033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.264 [2024-10-01 20:40:22.369591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.264 [2024-10-01 20:40:22.369636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.264 [2024-10-01 20:40:22.369647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.264 [2024-10-01 20:40:22.380466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.264 [2024-10-01 20:40:22.380528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.264 [2024-10-01 20:40:22.380539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.264 [2024-10-01 20:40:22.393133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.264 [2024-10-01 20:40:22.393189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.264 [2024-10-01 20:40:22.393201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.264 [2024-10-01 20:40:22.405531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.264 [2024-10-01 20:40:22.405575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.264 [2024-10-01 20:40:22.405585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.264 [2024-10-01 20:40:22.415291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.264 [2024-10-01 20:40:22.415335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23364 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:41:27.264 [2024-10-01 20:40:22.415345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.264 [2024-10-01 20:40:22.428023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.264 [2024-10-01 20:40:22.428069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.264 [2024-10-01 20:40:22.428080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.264 [2024-10-01 20:40:22.439453] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.264 [2024-10-01 20:40:22.439498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:42 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.264 [2024-10-01 20:40:22.439509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.264 [2024-10-01 20:40:22.449378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.264 [2024-10-01 20:40:22.449424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.264 [2024-10-01 20:40:22.449435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.264 [2024-10-01 20:40:22.463344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.264 [2024-10-01 20:40:22.463388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.264 [2024-10-01 20:40:22.463399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.264 [2024-10-01 20:40:22.473660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.264 [2024-10-01 20:40:22.473705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.264 [2024-10-01 20:40:22.473715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.264 [2024-10-01 20:40:22.485628] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.264 [2024-10-01 20:40:22.485673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.264 [2024-10-01 20:40:22.485684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.264 [2024-10-01 20:40:22.496628] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.264 [2024-10-01 20:40:22.496673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:56 nsid:1 lba:9184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.264 [2024-10-01 20:40:22.496684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.264 [2024-10-01 20:40:22.508323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.264 [2024-10-01 20:40:22.508368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.264 [2024-10-01 20:40:22.508379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.523 [2024-10-01 20:40:22.521741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.523 [2024-10-01 20:40:22.521786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.523 [2024-10-01 20:40:22.521796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.523 [2024-10-01 20:40:22.533227] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.523 [2024-10-01 20:40:22.533272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.523 [2024-10-01 20:40:22.533283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.523 [2024-10-01 20:40:22.544696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.523 [2024-10-01 20:40:22.544739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.523 [2024-10-01 20:40:22.544750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.524 [2024-10-01 20:40:22.556266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.524 [2024-10-01 20:40:22.556310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.524 [2024-10-01 20:40:22.556321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.524 [2024-10-01 20:40:22.567935] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.524 [2024-10-01 20:40:22.567982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.524 [2024-10-01 20:40:22.567994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.524 [2024-10-01 20:40:22.578670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.524 [2024-10-01 20:40:22.578715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.524 [2024-10-01 20:40:22.578726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.524 [2024-10-01 20:40:22.591176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.524 [2024-10-01 20:40:22.591219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.524 [2024-10-01 20:40:22.591230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.524 [2024-10-01 20:40:22.602582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.524 [2024-10-01 20:40:22.602626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.524 [2024-10-01 20:40:22.602637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.524 [2024-10-01 20:40:22.614655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.524 [2024-10-01 20:40:22.614700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.524 [2024-10-01 20:40:22.614711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.524 [2024-10-01 20:40:22.626737] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.524 [2024-10-01 20:40:22.626782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.524 [2024-10-01 20:40:22.626792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.524 [2024-10-01 20:40:22.636931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.524 [2024-10-01 20:40:22.636977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.524 [2024-10-01 20:40:22.636989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.524 [2024-10-01 20:40:22.648436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.524 [2024-10-01 20:40:22.648480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.524 [2024-10-01 20:40:22.648491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.524 [2024-10-01 20:40:22.660151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 
00:41:27.524 [2024-10-01 20:40:22.660236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.524 [2024-10-01 20:40:22.660264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.524 [2024-10-01 20:40:22.672443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.524 [2024-10-01 20:40:22.672487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.524 [2024-10-01 20:40:22.672499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.524 [2024-10-01 20:40:22.684058] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.524 [2024-10-01 20:40:22.684105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.524 [2024-10-01 20:40:22.684116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.524 [2024-10-01 20:40:22.695657] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.524 [2024-10-01 20:40:22.695702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.524 [2024-10-01 20:40:22.695712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.524 [2024-10-01 20:40:22.705545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.524 [2024-10-01 20:40:22.705589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.524 [2024-10-01 20:40:22.705600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.524 [2024-10-01 20:40:22.717499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.524 [2024-10-01 20:40:22.717560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.524 [2024-10-01 20:40:22.717571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.524 [2024-10-01 20:40:22.730400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600) 00:41:27.524 [2024-10-01 20:40:22.730444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:27.524 [2024-10-01 20:40:22.730455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:27.524 [2024-10-01 20:40:22.742360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x126f600)
00:41:27.524 [2024-10-01 20:40:22.742405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:27.524 [2024-10-01 20:40:22.742417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:27.524 [2024-10-01 20:40:22.752382] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x126f600)
[... the same three-line pattern (data digest error on tqpair=(0x126f600), READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for every affected I/O between 20:40:22.752 and 20:40:23.003; only the timestamp, cid, and lba fields vary ...]
00:41:27.784 21328.00 IOPS, 83.31 MiB/s
[... the digest-error pattern continues unchanged between 20:40:23.003 and 20:40:24.001 ...]
00:41:28.824 21272.50 IOPS, 83.10 MiB/s
[... one final occurrence of the pattern at 20:40:24.001 elided ...]
00:41:28.824
00:41:28.824 Latency(us)
00:41:28.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:28.824 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:41:28.824 nvme0n1 : 2.00 21289.31 83.16 0.00 0.00 6005.51 3172.54 18945.86
00:41:28.824 ===================================================================================================================
00:41:28.824 Total : 21289.31 83.16 0.00 0.00 6005.51 3172.54 18945.86
00:41:28.824 {
00:41:28.824 "results": [
00:41:28.824 {
00:41:28.824 "job": "nvme0n1",
00:41:28.824 "core_mask": "0x2",
00:41:28.824 "workload": "randread",
00:41:28.824 "status": "finished",
00:41:28.824 "queue_depth": 128,
00:41:28.824 "io_size": 4096,
00:41:28.824 "runtime": 2.004433,
00:41:28.824 "iops": 21289.312239421324,
00:41:28.824 "mibps": 83.16137593523955,
00:41:28.824 "io_failed": 0,
00:41:28.824 "io_timeout": 0,
00:41:28.824 "avg_latency_us": 6005.513502044086,
00:41:28.824 "min_latency_us": 3172.538181818182,
00:41:28.824 "max_latency_us": 18945.861818181816
00:41:28.824 }
00:41:28.824 ],
00:41:28.824 "core_count": 1
00:41:28.824 }
00:41:28.824 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:41:28.824 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:41:28.824 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:41:28.824 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:41:28.824 | .driver_specific
00:41:28.824 | .nvme_error
00:41:28.824 | .status_code
00:41:28.824 | .command_transient_transport_error'
00:41:29.083 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 167 > 0 ))
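The trace lines above are the entire pass/fail check for this run: the harness queries bdevperf's per-bdev I/O statistics over the RPC socket and asserts that the NVMe transient-transport-error counter, which every injected digest error increments, is positive (167 here). A minimal standalone sketch of the same check, assuming a bdevperf instance is still serving RPC on /var/tmp/bperf.sock; it mirrors the rpc.py call and jq filter from the trace and is not the test's actual get_transient_errcount helper:

  # Read the transient transport error counter for nvme0n1 via bdevperf's RPC socket.
  err=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The assertion from the trace: at least one such error must have been recorded.
  (( err > 0 )) || echo "expected transient transport errors, saw ${err:-none}" >&2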
00:41:29.083 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93946
00:41:29.083 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 93946 ']'
00:41:29.083 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 93946
00:41:29.083 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:41:29.083 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:41:29.083 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93946
00:41:29.341 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:41:29.341 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:41:29.341 killing process with pid 93946
00:41:29.341 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93946'
00:41:29.342 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 93946
00:41:29.342 Received shutdown signal, test time was about 2.000000 seconds
00:41:29.342
00:41:29.342 Latency(us)
00:41:29.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:29.342 ===================================================================================================================
00:41:29.342 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:41:29.342 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 93946
00:41:29.342 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:41:29.342 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:41:29.342 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:41:29.342 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:41:29.342 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:41:29.342 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94035
00:41:29.342 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94035 /var/tmp/bperf.sock
00:41:29.342 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:41:29.342 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94035 ']'
00:41:29.342 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:41:29.342 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:41:29.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:41:29.342 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
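The second bdevperf instance is launched exactly like the first, only with the new workload parameters (128 KiB random reads at queue depth 16): the server goes into the background holding its JSON-RPC socket (-r /var/tmp/bperf.sock) on core mask 0x2, and -z keeps it idle until it is driven over RPC; waitforlisten then retries until the socket answers. A condensed sketch of that launch-and-wait step, with the bdevperf flags copied from the trace; the polling loop is only illustrative and is not autotest's waitforlisten implementation:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Retry a trivial RPC until the socket is up (max_retries=100 in the trace).
  for _ in $(seq 1 100); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods > /dev/null 2>&1 && break
      sleep 0.1
  done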
00:41:29.342 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:41:29.342 20:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:41:29.600 I/O size of 131072 is greater than zero copy threshold (65536).
00:41:29.600 Zero copy mechanism will not be used.
00:41:29.600 [2024-10-01 20:40:24.615161] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization...
00:41:29.600 [2024-10-01 20:40:24.615267] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94035 ]
00:41:29.600 [2024-10-01 20:40:24.747055] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:29.600 [2024-10-01 20:40:24.799531] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:41:30.536 20:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:41:30.536 20:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:41:30.536 20:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:41:30.536 20:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:41:30.794 20:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:41:30.794 20:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:41:30.794 20:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:41:30.795 20:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:41:30.795 20:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:41:30.795 20:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:41:31.053 nvme0n1
00:41:31.053 20:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:41:31.053 20:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:41:31.053 20:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:41:31.053 20:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:41:31.053 20:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:41:31.053 20:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
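This RPC sequence is what manufactures the error flood that follows. Per-error-code statistics are switched on for NVMe bdevs (bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1, flags as traced), the target is attached over TCP with data digest enabled (--ddgst), and the accel framework's crc32c operation, the one nvme_tcp uses to verify received data digests (hence nvme_tcp_accel_seq_recv_compute_crc32_done in the errors below), is told to corrupt its results, so affected READs complete with COMMAND TRANSIENT TRANSPORT ERROR. A recap of the sequence as plain rpc.py calls; every argument is copied from the trace, and -i 32 is reproduced verbatim rather than interpreted:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc accel_error_inject_error -o crc32c -t disable    # ensure injection starts switched off
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # prints the created bdev name, nvme0n1
  rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  # Start the configured randread job; bdevperf was launched with -z and waits for this.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests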
00:41:31.053 I/O size of 131072 is greater than zero copy threshold (65536).
00:41:31.053 Zero copy mechanism will not be used.
00:41:31.053 Running I/O for 2 seconds...
00:41:31.053 [2024-10-01 20:40:26.289906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.053 [2024-10-01 20:40:26.289962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.053 [2024-10-01 20:40:26.289976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line digest-error pattern now repeats for the 131072-byte randread run on tqpair 0x1757d80 (READ length len:32 blocks); only the timestamp, cid, lba, and sqhd fields vary ...]
00:41:31.315 [2024-10-01 20:40:26.364519] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.315 [2024-10-01 20:40:26.364564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.315 [2024-10-01 20:40:26.364576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.315 [2024-10-01 20:40:26.368455] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.315 [2024-10-01 20:40:26.368500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.315 [2024-10-01 20:40:26.368511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.315 [2024-10-01 20:40:26.371576] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.315 [2024-10-01 20:40:26.371621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.315 [2024-10-01 20:40:26.371631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.315 [2024-10-01 20:40:26.375363] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.315 [2024-10-01 20:40:26.375410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.315 [2024-10-01 20:40:26.375421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.315 [2024-10-01 20:40:26.379611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.315 [2024-10-01 20:40:26.379657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.315 [2024-10-01 20:40:26.379668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.315 [2024-10-01 20:40:26.383559] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.315 [2024-10-01 20:40:26.383605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.315 [2024-10-01 20:40:26.383616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.315 [2024-10-01 20:40:26.385978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.315 [2024-10-01 20:40:26.386023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.315 [2024-10-01 20:40:26.386034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.315 [2024-10-01 20:40:26.390289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.315 [2024-10-01 20:40:26.390336] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.315 [2024-10-01 20:40:26.390347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.315 [2024-10-01 20:40:26.394296] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.315 [2024-10-01 20:40:26.394342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.315 [2024-10-01 20:40:26.394353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.315 [2024-10-01 20:40:26.397891] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.315 [2024-10-01 20:40:26.397937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.315 [2024-10-01 20:40:26.397948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.315 [2024-10-01 20:40:26.400837] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.315 [2024-10-01 20:40:26.400882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.315 [2024-10-01 20:40:26.400893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.315 [2024-10-01 20:40:26.405237] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.315 [2024-10-01 20:40:26.405284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.315 [2024-10-01 20:40:26.405295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.315 [2024-10-01 20:40:26.409421] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.315 [2024-10-01 20:40:26.409468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.315 [2024-10-01 20:40:26.409479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.315 [2024-10-01 20:40:26.413775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.315 [2024-10-01 20:40:26.413821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.315 [2024-10-01 20:40:26.413832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.315 [2024-10-01 20:40:26.416933] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.315 
[2024-10-01 20:40:26.416977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.315 [2024-10-01 20:40:26.416988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.315 [2024-10-01 20:40:26.420820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.315 [2024-10-01 20:40:26.420866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.315 [2024-10-01 20:40:26.420877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.315 [2024-10-01 20:40:26.424933] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.315 [2024-10-01 20:40:26.424979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.315 [2024-10-01 20:40:26.424989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.315 [2024-10-01 20:40:26.429169] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.315 [2024-10-01 20:40:26.429214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.315 [2024-10-01 20:40:26.429225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.315 [2024-10-01 20:40:26.431504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.315 [2024-10-01 20:40:26.431548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.431558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.435176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.435221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.435231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.438015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.438060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.438071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.441676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.441722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.441733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.445788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.445834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.445845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.448863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.448908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.448919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.452563] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.452608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.452619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.456959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.457005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.457016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.461281] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.461328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.461339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.464329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.464373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.464384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.467918] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.467965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.467976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.472013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.472061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.472072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.476290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.476335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.476346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.480257] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.480303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.480313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.482800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.482845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.482855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.486852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.486897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.486909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.490743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.490789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.490799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.493604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.493649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.493660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.497477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.497524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.497551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.501286] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.501333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.501344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.504615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.504660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.504671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.508085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.508158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.508185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.511125] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.511181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.316 [2024-10-01 20:40:26.511192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.316 [2024-10-01 20:40:26.514985] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.316 [2024-10-01 20:40:26.515031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.317 [2024-10-01 20:40:26.515041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.317 [2024-10-01 20:40:26.518486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.317 [2024-10-01 20:40:26.518531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.317 [2024-10-01 20:40:26.518542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.317 [2024-10-01 20:40:26.521948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.317 [2024-10-01 20:40:26.521993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.317 [2024-10-01 20:40:26.522004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.317 [2024-10-01 20:40:26.525920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.317 [2024-10-01 20:40:26.525966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.317 [2024-10-01 20:40:26.525977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.317 [2024-10-01 20:40:26.528858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.317 [2024-10-01 20:40:26.528903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.317 [2024-10-01 20:40:26.528914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.317 [2024-10-01 20:40:26.532633] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.317 [2024-10-01 20:40:26.532679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.317 [2024-10-01 20:40:26.532690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.317 [2024-10-01 20:40:26.536808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.317 [2024-10-01 20:40:26.536854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.317 [2024-10-01 20:40:26.536865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.317 [2024-10-01 20:40:26.539910] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.317 [2024-10-01 20:40:26.539955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.317 [2024-10-01 20:40:26.539966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.317 [2024-10-01 20:40:26.543332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.317 [2024-10-01 20:40:26.543377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.317 [2024-10-01 20:40:26.543387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.317 [2024-10-01 20:40:26.547053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.317 [2024-10-01 20:40:26.547097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.317 [2024-10-01 20:40:26.547108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.317 [2024-10-01 20:40:26.550901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.317 [2024-10-01 20:40:26.550947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.317 [2024-10-01 20:40:26.550958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.317 [2024-10-01 20:40:26.553769] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.317 [2024-10-01 20:40:26.553814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.317 [2024-10-01 20:40:26.553825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.317 [2024-10-01 20:40:26.557625] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.317 [2024-10-01 20:40:26.557671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.317 [2024-10-01 20:40:26.557682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.317 [2024-10-01 20:40:26.561294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.317 [2024-10-01 20:40:26.561355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.317 [2024-10-01 20:40:26.561367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.577 [2024-10-01 20:40:26.565387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.577 [2024-10-01 20:40:26.565433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:41:31.577 [2024-10-01 20:40:26.565445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.577 [2024-10-01 20:40:26.569172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.577 [2024-10-01 20:40:26.569216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.577 [2024-10-01 20:40:26.569227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.577 [2024-10-01 20:40:26.572640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.577 [2024-10-01 20:40:26.572718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.577 [2024-10-01 20:40:26.572730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.577 [2024-10-01 20:40:26.576648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.577 [2024-10-01 20:40:26.576694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.577 [2024-10-01 20:40:26.576705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.577 [2024-10-01 20:40:26.579843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.577 [2024-10-01 20:40:26.579890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.577 [2024-10-01 20:40:26.579901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.577 [2024-10-01 20:40:26.583790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.577 [2024-10-01 20:40:26.583859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.577 [2024-10-01 20:40:26.583871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.577 [2024-10-01 20:40:26.588345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.577 [2024-10-01 20:40:26.588389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.577 [2024-10-01 20:40:26.588400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.577 [2024-10-01 20:40:26.591198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.577 [2024-10-01 20:40:26.591241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24256 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.577 [2024-10-01 20:40:26.591252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.577 [2024-10-01 20:40:26.594843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.577 [2024-10-01 20:40:26.594888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.577 [2024-10-01 20:40:26.594898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.577 [2024-10-01 20:40:26.598316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.577 [2024-10-01 20:40:26.598361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.577 [2024-10-01 20:40:26.598373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.577 [2024-10-01 20:40:26.602019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.577 [2024-10-01 20:40:26.602064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.577 [2024-10-01 20:40:26.602075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.577 [2024-10-01 20:40:26.605895] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.577 [2024-10-01 20:40:26.605940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.577 [2024-10-01 20:40:26.605951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.577 [2024-10-01 20:40:26.609287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.577 [2024-10-01 20:40:26.609333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.577 [2024-10-01 20:40:26.609344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.577 [2024-10-01 20:40:26.613335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.577 [2024-10-01 20:40:26.613382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.577 [2024-10-01 20:40:26.613393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.577 [2024-10-01 20:40:26.616588] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.577 [2024-10-01 20:40:26.616632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.577 [2024-10-01 20:40:26.616642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.577 [2024-10-01 20:40:26.620316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.577 [2024-10-01 20:40:26.620361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.577 [2024-10-01 20:40:26.620371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.577 [2024-10-01 20:40:26.623779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.577 [2024-10-01 20:40:26.623847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.577 [2024-10-01 20:40:26.623874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.577 [2024-10-01 20:40:26.626796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.577 [2024-10-01 20:40:26.626841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.577 [2024-10-01 20:40:26.626851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.577 [2024-10-01 20:40:26.631203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.577 [2024-10-01 20:40:26.631248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.577 [2024-10-01 20:40:26.631258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.577 [2024-10-01 20:40:26.635337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.577 [2024-10-01 20:40:26.635383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.577 [2024-10-01 20:40:26.635394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.578 [2024-10-01 20:40:26.639474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.578 [2024-10-01 20:40:26.639520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.578 [2024-10-01 20:40:26.639531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.578 [2024-10-01 20:40:26.641907] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 
00:41:31.578 [2024-10-01 20:40:26.641951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.578 [2024-10-01 20:40:26.641961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.578 [2024-10-01 20:40:26.646102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.578 [2024-10-01 20:40:26.646158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.578 [2024-10-01 20:40:26.646170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.578 [2024-10-01 20:40:26.650546] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.578 [2024-10-01 20:40:26.650591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.578 [2024-10-01 20:40:26.650602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.578 [2024-10-01 20:40:26.655052] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.578 [2024-10-01 20:40:26.655099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.578 [2024-10-01 20:40:26.655110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.578 [2024-10-01 20:40:26.658242] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.578 [2024-10-01 20:40:26.658288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.578 [2024-10-01 20:40:26.658298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.578 [2024-10-01 20:40:26.661746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.578 [2024-10-01 20:40:26.661791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.578 [2024-10-01 20:40:26.661802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.578 [2024-10-01 20:40:26.665635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.578 [2024-10-01 20:40:26.665679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.578 [2024-10-01 20:40:26.665690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.578 [2024-10-01 20:40:26.669286] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.578 [2024-10-01 20:40:26.669330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.578 [2024-10-01 20:40:26.669340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.578 [2024-10-01 20:40:26.672248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.578 [2024-10-01 20:40:26.672290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.578 [2024-10-01 20:40:26.672300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.578 [2024-10-01 20:40:26.675682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.578 [2024-10-01 20:40:26.675727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.578 [2024-10-01 20:40:26.675737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.578 [2024-10-01 20:40:26.679154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.578 [2024-10-01 20:40:26.679199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.578 [2024-10-01 20:40:26.679209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.578 [2024-10-01 20:40:26.682704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.578 [2024-10-01 20:40:26.682749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.578 [2024-10-01 20:40:26.682759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.578 [2024-10-01 20:40:26.686182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.578 [2024-10-01 20:40:26.686226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.578 [2024-10-01 20:40:26.686236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.578 [2024-10-01 20:40:26.689574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.578 [2024-10-01 20:40:26.689620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.578 [2024-10-01 20:40:26.689630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.578 [2024-10-01 20:40:26.692876] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.578 [2024-10-01 20:40:26.692922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.578 [2024-10-01 20:40:26.692933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.578 [2024-10-01 20:40:26.696784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.578 [2024-10-01 20:40:26.696830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.578 [2024-10-01 20:40:26.696840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.578 [2024-10-01 20:40:26.700215] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.578 [2024-10-01 20:40:26.700259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.578 [2024-10-01 20:40:26.700270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:31.578 [2024-10-01 20:40:26.703436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.578 [2024-10-01 20:40:26.703482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.578 [2024-10-01 20:40:26.703492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:31.578 [2024-10-01 20:40:26.706477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.578 [2024-10-01 20:40:26.706523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.578 [2024-10-01 20:40:26.706533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:31.578 [2024-10-01 20:40:26.710632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.578 [2024-10-01 20:40:26.710677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.578 [2024-10-01 20:40:26.710687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:31.578 [2024-10-01 20:40:26.713817] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:31.578 [2024-10-01 20:40:26.713862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:31.578 [2024-10-01 20:40:26.713873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:41:31.578 [2024-10-01 20:40:26.717373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.578 [2024-10-01 20:40:26.717419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.717430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.720993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.721038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.721049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.724038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.724071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.724083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.727317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.727362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.727372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.731403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.731450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.731461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.734310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.734356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.734367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.738536] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.738581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.738592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.742609] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.742654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.742665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.746635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.746680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.746691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.750854] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.750900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.750910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.753821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.753865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.753876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.757660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.757706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.757717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.761881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.761926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.761937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.765970] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.766015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.766026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.770372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.770417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.770428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.773563] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.773608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.773618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.777404] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.777450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.777461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.781835] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.781880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.781891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.786115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.786170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.786181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.788571] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.788615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.788626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.792507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.792552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.792563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.796935] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.796980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.796991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.801240] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.801285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.801296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.579 [2024-10-01 20:40:26.804312] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.579 [2024-10-01 20:40:26.804356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.579 [2024-10-01 20:40:26.804366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.580 [2024-10-01 20:40:26.808269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.580 [2024-10-01 20:40:26.808315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.580 [2024-10-01 20:40:26.808326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.580 [2024-10-01 20:40:26.812534] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.580 [2024-10-01 20:40:26.812580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.580 [2024-10-01 20:40:26.812591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.580 [2024-10-01 20:40:26.815564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.580 [2024-10-01 20:40:26.815609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.580 [2024-10-01 20:40:26.815620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.580 [2024-10-01 20:40:26.819171] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.580 [2024-10-01 20:40:26.819215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.580 [2024-10-01 20:40:26.819225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.580 [2024-10-01 20:40:26.823182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.580 [2024-10-01 20:40:26.823226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.580 [2024-10-01 20:40:26.823237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.842 [2024-10-01 20:40:26.828510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.842 [2024-10-01 20:40:26.828558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.842 [2024-10-01 20:40:26.828569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.842 [2024-10-01 20:40:26.831718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.842 [2024-10-01 20:40:26.831764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.842 [2024-10-01 20:40:26.831774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.842 [2024-10-01 20:40:26.835635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.842 [2024-10-01 20:40:26.835681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.842 [2024-10-01 20:40:26.835692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.842 [2024-10-01 20:40:26.840662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.842 [2024-10-01 20:40:26.840710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.842 [2024-10-01 20:40:26.840721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.842 [2024-10-01 20:40:26.844257] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.842 [2024-10-01 20:40:26.844307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.842 [2024-10-01 20:40:26.844320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.842 [2024-10-01 20:40:26.848217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.842 [2024-10-01 20:40:26.848280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.842 [2024-10-01 20:40:26.848292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.842 [2024-10-01 20:40:26.853063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.842 [2024-10-01 20:40:26.853110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.842 [2024-10-01 20:40:26.853165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.842 [2024-10-01 20:40:26.856336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.842 [2024-10-01 20:40:26.856384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.842 [2024-10-01 20:40:26.856396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.842 [2024-10-01 20:40:26.860651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.842 [2024-10-01 20:40:26.860697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.842 [2024-10-01 20:40:26.860708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.842 [2024-10-01 20:40:26.864718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.842 [2024-10-01 20:40:26.864764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.842 [2024-10-01 20:40:26.864774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.842 [2024-10-01 20:40:26.867755] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.842 [2024-10-01 20:40:26.867801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.842 [2024-10-01 20:40:26.867853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.842 [2024-10-01 20:40:26.871768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.842 [2024-10-01 20:40:26.871840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.842 [2024-10-01 20:40:26.871854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.842 [2024-10-01 20:40:26.876049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.842 [2024-10-01 20:40:26.876087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.842 [2024-10-01 20:40:26.876100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.842 [2024-10-01 20:40:26.878998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.842 [2024-10-01 20:40:26.879044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.842 [2024-10-01 20:40:26.879055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.842 [2024-10-01 20:40:26.882396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.842 [2024-10-01 20:40:26.882443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.842 [2024-10-01 20:40:26.882454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.842 [2024-10-01 20:40:26.886236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.842 [2024-10-01 20:40:26.886282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.842 [2024-10-01 20:40:26.886294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.842 [2024-10-01 20:40:26.890101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.890157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.843 [2024-10-01 20:40:26.890168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.843 [2024-10-01 20:40:26.893736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.893782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.843 [2024-10-01 20:40:26.893793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.843 [2024-10-01 20:40:26.897667] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.897714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.843 [2024-10-01 20:40:26.897725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.843 [2024-10-01 20:40:26.901637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.901683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.843 [2024-10-01 20:40:26.901694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.843 [2024-10-01 20:40:26.905299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.905355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.843 [2024-10-01 20:40:26.905373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.843 [2024-10-01 20:40:26.909337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.909383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.843 [2024-10-01 20:40:26.909394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.843 [2024-10-01 20:40:26.912946] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.912993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.843 [2024-10-01 20:40:26.913004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.843 [2024-10-01 20:40:26.916427] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.916474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.843 [2024-10-01 20:40:26.916485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.843 [2024-10-01 20:40:26.920555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.920601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.843 [2024-10-01 20:40:26.920611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.843 [2024-10-01 20:40:26.924290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.924338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.843 [2024-10-01 20:40:26.924349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.843 [2024-10-01 20:40:26.927005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.927051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.843 [2024-10-01 20:40:26.927063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.843 [2024-10-01 20:40:26.931084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.931159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.843 [2024-10-01 20:40:26.931171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.843 [2024-10-01 20:40:26.934470] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.934518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.843 [2024-10-01 20:40:26.934530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.843 [2024-10-01 20:40:26.938850] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.938911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.843 [2024-10-01 20:40:26.938923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.843 [2024-10-01 20:40:26.942232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.942282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.843 [2024-10-01 20:40:26.942294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.843 [2024-10-01 20:40:26.947374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.947425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.843 [2024-10-01 20:40:26.947438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.843 [2024-10-01 20:40:26.951975] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.952029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.843 [2024-10-01 20:40:26.952042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.843 [2024-10-01 20:40:26.955315] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.955366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.843 [2024-10-01 20:40:26.955379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.843 [2024-10-01 20:40:26.960133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.960193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.843 [2024-10-01 20:40:26.960206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.843 [2024-10-01 20:40:26.964266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.964314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.843 [2024-10-01 20:40:26.964325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.843 [2024-10-01 20:40:26.967564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.967611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.843 [2024-10-01 20:40:26.967621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.843 [2024-10-01 20:40:26.972267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.972316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.843 [2024-10-01 20:40:26.972328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.843 [2024-10-01 20:40:26.977312] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.843 [2024-10-01 20:40:26.977361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:26.977372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:26.980557] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.844 [2024-10-01 20:40:26.980602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:26.980613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:26.984115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.844 [2024-10-01 20:40:26.984206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:26.984218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:26.988842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.844 [2024-10-01 20:40:26.988889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:26.988900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:26.993647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.844 [2024-10-01 20:40:26.993694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:26.993705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:26.996773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.844 [2024-10-01 20:40:26.996819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:26.996830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:27.000736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.844 [2024-10-01 20:40:27.000782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:27.000793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:27.005065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.844 [2024-10-01 20:40:27.005112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:27.005149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:27.009803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.844 [2024-10-01 20:40:27.009851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:27.009863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:27.013143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.844 [2024-10-01 20:40:27.013199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:27.013211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:27.017082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.844 [2024-10-01 20:40:27.017154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:27.017167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:27.021488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.844 [2024-10-01 20:40:27.021552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:27.021563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:27.026041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.844 [2024-10-01 20:40:27.026087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:27.026098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:27.028758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.844 [2024-10-01 20:40:27.028803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:27.028814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:27.033333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.844 [2024-10-01 20:40:27.033381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:27.033392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:27.037761] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.844 [2024-10-01 20:40:27.037807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:27.037818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:27.040767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.844 [2024-10-01 20:40:27.040812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:27.040823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:27.044258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.844 [2024-10-01 20:40:27.044303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:27.044313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:27.048759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.844 [2024-10-01 20:40:27.048806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:27.048816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:27.052837] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.844 [2024-10-01 20:40:27.052883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:27.052894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:27.055210] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.844 [2024-10-01 20:40:27.055253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:27.055263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:27.059428] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.844 [2024-10-01 20:40:27.059475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:27.059486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:27.063260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.844 [2024-10-01 20:40:27.063322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.844 [2024-10-01 20:40:27.063333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.844 [2024-10-01 20:40:27.066330] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.845 [2024-10-01 20:40:27.066377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.845 [2024-10-01 20:40:27.066388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.845 [2024-10-01 20:40:27.069994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.845 [2024-10-01 20:40:27.070040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.845 [2024-10-01 20:40:27.070051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.845 [2024-10-01 20:40:27.073563] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.845 [2024-10-01 20:40:27.073609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.845 [2024-10-01 20:40:27.073620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:31.845 [2024-10-01 20:40:27.077387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.845 [2024-10-01 20:40:27.077433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.845 [2024-10-01 20:40:27.077444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:31.845 [2024-10-01 20:40:27.080982] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.845 [2024-10-01 20:40:27.081027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.845 [2024-10-01 20:40:27.081038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:31.845 [2024-10-01 20:40:27.084475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.845 [2024-10-01 20:40:27.084520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.845 [2024-10-01 20:40:27.084531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:31.845 [2024-10-01 20:40:27.088637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:31.845 [2024-10-01 20:40:27.088683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:31.845 [2024-10-01 20:40:27.088693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:32.106 [2024-10-01 20:40:27.092666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.106 [2024-10-01 20:40:27.092728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.106 [2024-10-01 20:40:27.092739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:32.106 [2024-10-01 20:40:27.096528] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.106 [2024-10-01 20:40:27.096574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.106 [2024-10-01 20:40:27.096585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:32.106 [2024-10-01 20:40:27.100283] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.106 [2024-10-01 20:40:27.100329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.106 [2024-10-01 20:40:27.100341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:32.106 [2024-10-01 20:40:27.103799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.106 [2024-10-01 20:40:27.103887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.106 [2024-10-01 20:40:27.103899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:32.106 [2024-10-01 20:40:27.107312] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.106 [2024-10-01 20:40:27.107357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.106 [2024-10-01 20:40:27.107368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:32.106 [2024-10-01 20:40:27.110218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.106 [2024-10-01 20:40:27.110263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.106 [2024-10-01 20:40:27.110274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:32.106 [2024-10-01 20:40:27.114077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.106 [2024-10-01 20:40:27.114148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.106 [2024-10-01 20:40:27.114161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:32.106 [2024-10-01 20:40:27.117719] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.106 [2024-10-01 20:40:27.117764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.106 [2024-10-01 20:40:27.117774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:32.106 [2024-10-01 20:40:27.120545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.106 [2024-10-01 20:40:27.120605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.106 [2024-10-01 20:40:27.120616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:32.106 [2024-10-01 20:40:27.124523] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.106 [2024-10-01 20:40:27.124569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.106 [2024-10-01 20:40:27.124579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:32.106 [2024-10-01 20:40:27.128707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.106 [2024-10-01 20:40:27.128753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.106 [2024-10-01 20:40:27.128765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:32.106 [2024-10-01 20:40:27.132863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.106 [2024-10-01 20:40:27.132909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.106 [2024-10-01 20:40:27.132919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:32.106 [2024-10-01 20:40:27.135432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.106 [2024-10-01 20:40:27.135476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.106 [2024-10-01 20:40:27.135487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:32.106 [2024-10-01 20:40:27.139023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.106 [2024-10-01 20:40:27.139069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.106 [2024-10-01 20:40:27.139080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:32.106 [2024-10-01 20:40:27.142295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.106 [2024-10-01 20:40:27.142340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.106 [2024-10-01 20:40:27.142351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:32.106 [2024-10-01 20:40:27.145537] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.145582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.145593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:32.107 [2024-10-01 20:40:27.149366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.149411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.149422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:32.107 [2024-10-01 20:40:27.152652] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.152698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.152708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:32.107 [2024-10-01 20:40:27.156715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.156760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.156771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:32.107 [2024-10-01 20:40:27.159934] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.159981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.159993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:32.107 [2024-10-01 20:40:27.163720] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.163765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.163775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:32.107 [2024-10-01 20:40:27.167334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.167379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.167390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:32.107 [2024-10-01 20:40:27.170292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.170338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.170349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:32.107 [2024-10-01 20:40:27.174173] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.174217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.174227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:32.107 [2024-10-01 20:40:27.177530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.177576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.177587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:32.107 [2024-10-01 20:40:27.181055] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.181100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.181111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:32.107 [2024-10-01 20:40:27.184414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.184458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.184469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:32.107 [2024-10-01 20:40:27.188063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.188096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.188120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:32.107 [2024-10-01 20:40:27.191663] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.191708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.191719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:32.107 [2024-10-01 20:40:27.195077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.195132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.195145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:32.107 [2024-10-01 20:40:27.198900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.198945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.198956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:32.107 [2024-10-01 20:40:27.202566] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.202612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.202623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:32.107 [2024-10-01 20:40:27.205503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.205548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.205558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:32.107 [2024-10-01 20:40:27.209658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.209704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.209715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:32.107 [2024-10-01 20:40:27.213008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.213053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.213063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:32.107 [2024-10-01 20:40:27.216747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.216793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.216804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:32.107 [2024-10-01 20:40:27.220473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.220517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.220527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:41:32.107 [2024-10-01 20:40:27.224689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.107 [2024-10-01 20:40:27.224735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.107 [2024-10-01 20:40:27.224746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:32.108 [2024-10-01 20:40:27.227630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.108 [2024-10-01 20:40:27.227674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.108 [2024-10-01 20:40:27.227685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:32.108 [2024-10-01 20:40:27.231391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.108 [2024-10-01 20:40:27.231436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.108 [2024-10-01 20:40:27.231448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:32.108 [2024-10-01 20:40:27.235350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.108 [2024-10-01 20:40:27.235395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.108 [2024-10-01 20:40:27.235406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.108 [2024-10-01 20:40:27.238991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.108 [2024-10-01 20:40:27.239036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.108 [2024-10-01 20:40:27.239047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.108 [2024-10-01 20:40:27.241567] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.108 [2024-10-01 20:40:27.241611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.108 [2024-10-01 20:40:27.241622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.108 [2024-10-01 20:40:27.245571] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.108 [2024-10-01 20:40:27.245616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.108 [2024-10-01 20:40:27.245627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.108 [2024-10-01 20:40:27.249852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.108 [2024-10-01 20:40:27.249898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.108 [2024-10-01 20:40:27.249909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.108 [2024-10-01 20:40:27.253942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.108 [2024-10-01 20:40:27.253988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.108 [2024-10-01 20:40:27.253998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.108 [2024-10-01 20:40:27.257809] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.108 [2024-10-01 20:40:27.257853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.108 [2024-10-01 20:40:27.257865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.108 [2024-10-01 20:40:27.261880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.108 [2024-10-01 20:40:27.261925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.108 [2024-10-01 20:40:27.261935] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.108 [2024-10-01 20:40:27.265910] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.108 [2024-10-01 20:40:27.265954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.108 [2024-10-01 20:40:27.265965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.108 [2024-10-01 20:40:27.269722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.108 [2024-10-01 20:40:27.269766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.108 [2024-10-01 20:40:27.269776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.108 [2024-10-01 20:40:27.274005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.108 [2024-10-01 20:40:27.274051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.108 [2024-10-01 20:40:27.274062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.108 [2024-10-01 20:40:27.277786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.108 [2024-10-01 20:40:27.277831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.108 [2024-10-01 20:40:27.277841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.108 8265.00 IOPS, 1033.12 MiB/s [2024-10-01 20:40:27.281866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.108 [2024-10-01 20:40:27.281892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.108 [2024-10-01 20:40:27.281903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.108 [2024-10-01 20:40:27.286136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.108 [2024-10-01 20:40:27.286180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.108 [2024-10-01 20:40:27.286191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.108 [2024-10-01 20:40:27.290391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.108 [2024-10-01 20:40:27.290438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.108 
[2024-10-01 20:40:27.290448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.108 [2024-10-01 20:40:27.294667] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.108 [2024-10-01 20:40:27.294711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.108 [2024-10-01 20:40:27.294722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.108 [2024-10-01 20:40:27.299243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.108 [2024-10-01 20:40:27.299288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.108 [2024-10-01 20:40:27.299299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.108 [2024-10-01 20:40:27.302936] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.108 [2024-10-01 20:40:27.302981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.108 [2024-10-01 20:40:27.302991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.108 [2024-10-01 20:40:27.305913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.108 [2024-10-01 20:40:27.305958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.108 [2024-10-01 20:40:27.305968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.108 [2024-10-01 20:40:27.310403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.108 [2024-10-01 20:40:27.310434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.108 [2024-10-01 20:40:27.310445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.108 [2024-10-01 20:40:27.314193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.109 [2024-10-01 20:40:27.314238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.109 [2024-10-01 20:40:27.314248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.109 [2024-10-01 20:40:27.318262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.109 [2024-10-01 20:40:27.318309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7520 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.109 [2024-10-01 20:40:27.318319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.109 [2024-10-01 20:40:27.320681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.109 [2024-10-01 20:40:27.320725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.109 [2024-10-01 20:40:27.320735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.109 [2024-10-01 20:40:27.324507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.109 [2024-10-01 20:40:27.324552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.109 [2024-10-01 20:40:27.324562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.109 [2024-10-01 20:40:27.328528] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.109 [2024-10-01 20:40:27.328573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.109 [2024-10-01 20:40:27.328584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.109 [2024-10-01 20:40:27.331098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.109 [2024-10-01 20:40:27.331151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.109 [2024-10-01 20:40:27.331162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.109 [2024-10-01 20:40:27.334662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.109 [2024-10-01 20:40:27.334707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.109 [2024-10-01 20:40:27.334718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.109 [2024-10-01 20:40:27.339076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.109 [2024-10-01 20:40:27.339133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.109 [2024-10-01 20:40:27.339145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.109 [2024-10-01 20:40:27.343244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.109 [2024-10-01 20:40:27.343290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.109 [2024-10-01 20:40:27.343301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.109 [2024-10-01 20:40:27.347244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.109 [2024-10-01 20:40:27.347289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.109 [2024-10-01 20:40:27.347300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.109 [2024-10-01 20:40:27.349762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.109 [2024-10-01 20:40:27.349806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.109 [2024-10-01 20:40:27.349816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.109 [2024-10-01 20:40:27.354560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.109 [2024-10-01 20:40:27.354607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.109 [2024-10-01 20:40:27.354617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.370 [2024-10-01 20:40:27.358684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.370 [2024-10-01 20:40:27.358731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.370 [2024-10-01 20:40:27.358742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.370 [2024-10-01 20:40:27.362144] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.370 [2024-10-01 20:40:27.362188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.370 [2024-10-01 20:40:27.362198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.370 [2024-10-01 20:40:27.365505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.370 [2024-10-01 20:40:27.365568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.370 [2024-10-01 20:40:27.365578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.370 [2024-10-01 20:40:27.369098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.370 [2024-10-01 20:40:27.369152] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.370 [2024-10-01 20:40:27.369163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.370 [2024-10-01 20:40:27.372744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.370 [2024-10-01 20:40:27.372790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.370 [2024-10-01 20:40:27.372800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.370 [2024-10-01 20:40:27.375870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.370 [2024-10-01 20:40:27.375918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.370 [2024-10-01 20:40:27.375930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.370 [2024-10-01 20:40:27.379173] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.370 [2024-10-01 20:40:27.379218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.370 [2024-10-01 20:40:27.379228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.370 [2024-10-01 20:40:27.382956] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.370 [2024-10-01 20:40:27.383002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.370 [2024-10-01 20:40:27.383013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.370 [2024-10-01 20:40:27.386720] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.370 [2024-10-01 20:40:27.386766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.370 [2024-10-01 20:40:27.386776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.370 [2024-10-01 20:40:27.389853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.370 [2024-10-01 20:40:27.389899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.370 [2024-10-01 20:40:27.389909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.370 [2024-10-01 20:40:27.393554] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1757d80) 00:41:32.370 [2024-10-01 20:40:27.393600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.370 [2024-10-01 20:40:27.393610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.370 [2024-10-01 20:40:27.397063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.370 [2024-10-01 20:40:27.397108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.370 [2024-10-01 20:40:27.397119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.370 [2024-10-01 20:40:27.400318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.370 [2024-10-01 20:40:27.400362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.400372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.371 [2024-10-01 20:40:27.403774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.371 [2024-10-01 20:40:27.403843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.403870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.371 [2024-10-01 20:40:27.407476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.371 [2024-10-01 20:40:27.407522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.407532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.371 [2024-10-01 20:40:27.410262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.371 [2024-10-01 20:40:27.410308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.410319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.371 [2024-10-01 20:40:27.414018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.371 [2024-10-01 20:40:27.414063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.414073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.371 [2024-10-01 20:40:27.417294] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.371 [2024-10-01 20:40:27.417339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.417350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.371 [2024-10-01 20:40:27.421010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.371 [2024-10-01 20:40:27.421057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.421069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.371 [2024-10-01 20:40:27.424812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.371 [2024-10-01 20:40:27.424859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.424870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.371 [2024-10-01 20:40:27.428150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.371 [2024-10-01 20:40:27.428193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.428220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.371 [2024-10-01 20:40:27.431739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.371 [2024-10-01 20:40:27.431786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.431797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.371 [2024-10-01 20:40:27.435222] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.371 [2024-10-01 20:40:27.435269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.435280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.371 [2024-10-01 20:40:27.438500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.371 [2024-10-01 20:40:27.438562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.438574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:41:32.371 [2024-10-01 20:40:27.441823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.371 [2024-10-01 20:40:27.441870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.441881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.371 [2024-10-01 20:40:27.445700] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.371 [2024-10-01 20:40:27.445748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.445759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.371 [2024-10-01 20:40:27.448766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.371 [2024-10-01 20:40:27.448814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.448825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.371 [2024-10-01 20:40:27.452740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.371 [2024-10-01 20:40:27.452786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.452797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.371 [2024-10-01 20:40:27.456885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.371 [2024-10-01 20:40:27.456930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.456941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.371 [2024-10-01 20:40:27.459786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.371 [2024-10-01 20:40:27.459855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.459866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.371 [2024-10-01 20:40:27.463680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.371 [2024-10-01 20:40:27.463726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.463736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.371 [2024-10-01 20:40:27.466744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.371 [2024-10-01 20:40:27.466789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.466800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.371 [2024-10-01 20:40:27.470354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.371 [2024-10-01 20:40:27.470401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.470412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.371 [2024-10-01 20:40:27.473741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.371 [2024-10-01 20:40:27.473787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.473797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.371 [2024-10-01 20:40:27.476873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.371 [2024-10-01 20:40:27.476918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.371 [2024-10-01 20:40:27.476928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.480626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.480671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.372 [2024-10-01 20:40:27.480682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.484261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.484306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.372 [2024-10-01 20:40:27.484316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.487856] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.487909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.372 [2024-10-01 20:40:27.487920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.491214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.491258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.372 [2024-10-01 20:40:27.491269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.494738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.494783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.372 [2024-10-01 20:40:27.494794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.498327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.498372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.372 [2024-10-01 20:40:27.498383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.501269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.501314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.372 [2024-10-01 20:40:27.501325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.504918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.504963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.372 [2024-10-01 20:40:27.504973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.508118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.508206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.372 [2024-10-01 20:40:27.508217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.511405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.511450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.372 
[2024-10-01 20:40:27.511461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.514691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.514737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.372 [2024-10-01 20:40:27.514747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.517692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.517738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.372 [2024-10-01 20:40:27.517749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.521473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.521520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.372 [2024-10-01 20:40:27.521547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.525865] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.525911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.372 [2024-10-01 20:40:27.525922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.530343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.530388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.372 [2024-10-01 20:40:27.530399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.533324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.533370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.372 [2024-10-01 20:40:27.533380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.537183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.537227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.372 [2024-10-01 20:40:27.537238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.541480] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.541527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.372 [2024-10-01 20:40:27.541554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.545837] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.545883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.372 [2024-10-01 20:40:27.545894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.549355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.549402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.372 [2024-10-01 20:40:27.549413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.552310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.552355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.372 [2024-10-01 20:40:27.552366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.556233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.556295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.372 [2024-10-01 20:40:27.556305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.372 [2024-10-01 20:40:27.559668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.372 [2024-10-01 20:40:27.559714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.373 [2024-10-01 20:40:27.559724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.373 [2024-10-01 20:40:27.562808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.373 [2024-10-01 20:40:27.562853] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.373 [2024-10-01 20:40:27.562864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.373 [2024-10-01 20:40:27.566447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.373 [2024-10-01 20:40:27.566492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.373 [2024-10-01 20:40:27.566503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.373 [2024-10-01 20:40:27.570135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.373 [2024-10-01 20:40:27.570179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.373 [2024-10-01 20:40:27.570189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.373 [2024-10-01 20:40:27.573798] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.373 [2024-10-01 20:40:27.573842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.373 [2024-10-01 20:40:27.573853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.373 [2024-10-01 20:40:27.577293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.373 [2024-10-01 20:40:27.577340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.373 [2024-10-01 20:40:27.577351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.373 [2024-10-01 20:40:27.581014] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.373 [2024-10-01 20:40:27.581060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.373 [2024-10-01 20:40:27.581071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.373 [2024-10-01 20:40:27.583933] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.373 [2024-10-01 20:40:27.583981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.373 [2024-10-01 20:40:27.583993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.373 [2024-10-01 20:40:27.587622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.373 [2024-10-01 
20:40:27.587668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.373 [2024-10-01 20:40:27.587679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:41:32.373 [2024-10-01 20:40:27.590748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.373 [2024-10-01 20:40:27.590794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.373 [2024-10-01 20:40:27.590805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-message sequence (nvme_tcp.c:1470 *ERROR*: data digest error on tqpair=(0x1757d80); nvme_qpair.c: 243 READ command; nvme_qpair.c: 474 COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for dozens of further READ commands with varying cid and lba, [2024-10-01 20:40:27.594] through [2024-10-01 20:40:28.094] ...]
00:41:32.901 [2024-10-01 20:40:28.098586] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80)
00:41:32.901 [2024-10-01 20:40:28.098633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:32.901 [2024-10-01 20:40:28.098644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:32.901 [2024-10-01 20:40:28.101875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x1757d80) 00:41:32.901 [2024-10-01 20:40:28.101920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.901 [2024-10-01 20:40:28.101931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.901 [2024-10-01 20:40:28.105864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.901 [2024-10-01 20:40:28.105912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.901 [2024-10-01 20:40:28.105923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.901 [2024-10-01 20:40:28.110188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.901 [2024-10-01 20:40:28.110235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.901 [2024-10-01 20:40:28.110246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.901 [2024-10-01 20:40:28.113439] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.901 [2024-10-01 20:40:28.113485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.901 [2024-10-01 20:40:28.113496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.901 [2024-10-01 20:40:28.117314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.901 [2024-10-01 20:40:28.117360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.901 [2024-10-01 20:40:28.117372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.901 [2024-10-01 20:40:28.121701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.901 [2024-10-01 20:40:28.121749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.901 [2024-10-01 20:40:28.121760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.901 [2024-10-01 20:40:28.124706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.901 [2024-10-01 20:40:28.124752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.901 [2024-10-01 20:40:28.124763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.901 [2024-10-01 20:40:28.128648] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.901 [2024-10-01 20:40:28.128696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.901 [2024-10-01 20:40:28.128707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.901 [2024-10-01 20:40:28.132117] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.901 [2024-10-01 20:40:28.132159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.901 [2024-10-01 20:40:28.132170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:32.901 [2024-10-01 20:40:28.135643] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.902 [2024-10-01 20:40:28.135689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.902 [2024-10-01 20:40:28.135700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:32.902 [2024-10-01 20:40:28.139286] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.902 [2024-10-01 20:40:28.139333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.902 [2024-10-01 20:40:28.139344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:32.902 [2024-10-01 20:40:28.142977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.902 [2024-10-01 20:40:28.143024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.902 [2024-10-01 20:40:28.143035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:32.902 [2024-10-01 20:40:28.145991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:32.902 [2024-10-01 20:40:28.146038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:32.902 [2024-10-01 20:40:28.146050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:33.162 [2024-10-01 20:40:28.150790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.162 [2024-10-01 20:40:28.150855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.162 [2024-10-01 20:40:28.150868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:41:33.162 [2024-10-01 20:40:28.153876] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.162 [2024-10-01 20:40:28.153922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.162 [2024-10-01 20:40:28.153933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:33.162 [2024-10-01 20:40:28.158154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.162 [2024-10-01 20:40:28.158211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.162 [2024-10-01 20:40:28.158223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:33.162 [2024-10-01 20:40:28.162740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.162 [2024-10-01 20:40:28.162787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.162 [2024-10-01 20:40:28.162798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:33.162 [2024-10-01 20:40:28.167474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.162 [2024-10-01 20:40:28.167538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.162 [2024-10-01 20:40:28.167550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:33.162 [2024-10-01 20:40:28.170799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.162 [2024-10-01 20:40:28.170845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.162 [2024-10-01 20:40:28.170855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:33.162 [2024-10-01 20:40:28.174389] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.162 [2024-10-01 20:40:28.174436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.162 [2024-10-01 20:40:28.174448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:33.162 [2024-10-01 20:40:28.178471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.162 [2024-10-01 20:40:28.178517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.162 [2024-10-01 20:40:28.178528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:33.162 [2024-10-01 20:40:28.183015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.162 [2024-10-01 20:40:28.183062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.162 [2024-10-01 20:40:28.183073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:33.162 [2024-10-01 20:40:28.187494] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.162 [2024-10-01 20:40:28.187558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.162 [2024-10-01 20:40:28.187569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:33.162 [2024-10-01 20:40:28.191251] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.162 [2024-10-01 20:40:28.191296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.162 [2024-10-01 20:40:28.191306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:33.162 [2024-10-01 20:40:28.195220] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.162 [2024-10-01 20:40:28.195268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.162 [2024-10-01 20:40:28.195280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:33.162 [2024-10-01 20:40:28.197912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.162 [2024-10-01 20:40:28.197958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.162 [2024-10-01 20:40:28.197969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:33.162 [2024-10-01 20:40:28.202212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.162 [2024-10-01 20:40:28.202258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.162 [2024-10-01 20:40:28.202269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:33.162 [2024-10-01 20:40:28.206654] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.162 [2024-10-01 20:40:28.206702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.162 [2024-10-01 20:40:28.206713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:33.162 [2024-10-01 20:40:28.211186] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.162 [2024-10-01 20:40:28.211232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.162 [2024-10-01 20:40:28.211244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:33.162 [2024-10-01 20:40:28.214438] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.162 [2024-10-01 20:40:28.214483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.162 [2024-10-01 20:40:28.214494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:33.162 [2024-10-01 20:40:28.218205] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.163 [2024-10-01 20:40:28.218253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.163 [2024-10-01 20:40:28.218264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:33.163 [2024-10-01 20:40:28.222676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.163 [2024-10-01 20:40:28.222725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.163 [2024-10-01 20:40:28.222736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:33.163 [2024-10-01 20:40:28.225815] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.163 [2024-10-01 20:40:28.225862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.163 [2024-10-01 20:40:28.225872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:33.163 [2024-10-01 20:40:28.229567] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.163 [2024-10-01 20:40:28.229615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.163 [2024-10-01 20:40:28.229626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:33.163 [2024-10-01 20:40:28.232753] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.163 [2024-10-01 20:40:28.232799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:41:33.163 [2024-10-01 20:40:28.232810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:33.163 [2024-10-01 20:40:28.236702] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.163 [2024-10-01 20:40:28.236750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.163 [2024-10-01 20:40:28.236761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:33.163 [2024-10-01 20:40:28.240547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.163 [2024-10-01 20:40:28.240594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.163 [2024-10-01 20:40:28.240605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:33.163 [2024-10-01 20:40:28.244252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.163 [2024-10-01 20:40:28.244298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.163 [2024-10-01 20:40:28.244309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:33.163 [2024-10-01 20:40:28.247750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.163 [2024-10-01 20:40:28.247794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.163 [2024-10-01 20:40:28.247829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:33.163 [2024-10-01 20:40:28.251255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.163 [2024-10-01 20:40:28.251300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.163 [2024-10-01 20:40:28.251310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:33.163 [2024-10-01 20:40:28.254705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.163 [2024-10-01 20:40:28.254752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.163 [2024-10-01 20:40:28.254762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:33.163 [2024-10-01 20:40:28.258192] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.163 [2024-10-01 20:40:28.258238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.163 [2024-10-01 20:40:28.258248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:33.163 [2024-10-01 20:40:28.261285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.163 [2024-10-01 20:40:28.261330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.163 [2024-10-01 20:40:28.261341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:33.163 [2024-10-01 20:40:28.265570] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.163 [2024-10-01 20:40:28.265616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.163 [2024-10-01 20:40:28.265626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:33.163 [2024-10-01 20:40:28.269863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.163 [2024-10-01 20:40:28.269909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.163 [2024-10-01 20:40:28.269920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:33.163 [2024-10-01 20:40:28.273989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.163 [2024-10-01 20:40:28.274035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.163 [2024-10-01 20:40:28.274046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:33.163 [2024-10-01 20:40:28.276932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.163 [2024-10-01 20:40:28.276978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.163 [2024-10-01 20:40:28.276989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:33.163 8317.00 IOPS, 1039.62 MiB/s [2024-10-01 20:40:28.282245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1757d80) 00:41:33.163 [2024-10-01 20:40:28.282278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:33.163 [2024-10-01 20:40:28.282291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:33.163 00:41:33.163 Latency(us) 00:41:33.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:33.163 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO 
00:41:33.163 nvme0n1 : 2.00 8314.94 1039.37 0.00 0.00 1920.22 506.41 7596.22
00:41:33.163 ===================================================================================================================
00:41:33.163 Total : 8314.94 1039.37 0.00 0.00 1920.22 506.41 7596.22
00:41:33.163 {
00:41:33.163   "results": [
00:41:33.163     {
00:41:33.163       "job": "nvme0n1",
00:41:33.163       "core_mask": "0x2",
00:41:33.163       "workload": "randread",
00:41:33.163       "status": "finished",
00:41:33.163       "queue_depth": 16,
00:41:33.163       "io_size": 131072,
00:41:33.163       "runtime": 2.002419,
00:41:33.163       "iops": 8314.943076349156,
00:41:33.163       "mibps": 1039.3678845436445,
00:41:33.163       "io_failed": 0,
00:41:33.163       "io_timeout": 0,
00:41:33.163       "avg_latency_us": 1920.2244804804805,
00:41:33.163       "min_latency_us": 506.4145454545455,
00:41:33.163       "max_latency_us": 7596.218181818182
00:41:33.163     }
00:41:33.163   ],
00:41:33.163   "core_count": 1
00:41:33.163 }
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 537 > 0 ))
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94035
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94035 ']'
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94035
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94035
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
killing process with pid 94035
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94035'
Received shutdown signal, test time was about 2.000000 seconds
00:41:33.423
00:41:33.423 Latency(us)
00:41:33.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:33.423 ===================================================================================================================
00:41:33.423 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
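The get_transient_errcount trace above is the test's pass condition: bdev_nvme was started with --nvme-error-stat, so it keeps a counter per NVMe status code, and the test asserts that the injected digest errors were accounted as COMMAND TRANSIENT TRANSPORT ERROR completions (537 in this run, against io_failed: 0, since failed commands are retried). A minimal standalone sketch of the same check, assuming the socket path and bdev name used in this run:

    #!/usr/bin/env bash
    # Read the transient-transport-error counter kept by bdev_nvme; the
    # jq filter is the one digest.sh traces above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    count=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # Pass only if at least one injected digest error was counted.
    (( count > 0 ))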
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94035
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94035
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94121
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94121 /var/tmp/bperf.sock
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94121 ']'
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
20:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
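For reference, a hand-runnable sketch of the bdevperf launch traced above (same paths as this run; the polling loop is only an approximation of what waitforlisten does, and -z keeps bdevperf idle until perform_tests arrives over the RPC socket):

    #!/usr/bin/env bash
    spdk=/home/vagrant/spdk_repo/spdk
    # Start bdevperf pinned to core 1 (-m 2) with a private RPC socket, idle (-z).
    "$spdk"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    # Poll until the UNIX-domain RPC socket answers.
    until "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done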
00:41:33.682 [2024-10-01 20:40:28.841930] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization...
00:41:33.682 [2024-10-01 20:40:28.842015] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94121 ]
00:41:33.948 [2024-10-01 20:40:28.974595] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:33.948 [2024-10-01 20:40:29.029059] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
20:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
20:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
20:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
20:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
20:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
20:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
20:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
20:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
20:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
20:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:41:34.468 nvme0n1
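The RPCs above are the whole digest-error setup: per-status-code NVMe error counters plus unlimited bdev-level retries (--bdev-retry-count -1), so an injected digest failure is recorded without failing the I/O; CRC32C corruption switched off while the controller attaches; and the controller attached with data digest enabled (--ddgst), which is what makes every payload carry the CRC32C the test later corrupts. A sketch of the same sequence, commands copied from the trace:

    #!/usr/bin/env bash
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    # Count NVMe errors per status code; retry failed commands indefinitely.
    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Keep crc32c error injection disabled while the controller connects.
    rpc accel_error_inject_error -o crc32c -t disable
    # Attach over TCP with data digest on; this creates bdev nvme0n1.
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0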
00:41:34.727 [2024-10-01 20:40:29.827658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028875458 00:41:34.727 [2024-10-01 20:40:29.828683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.727 [2024-10-01 20:40:29.828714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:34.727 [2024-10-01 20:40:29.840015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028863de8 00:41:34.727 [2024-10-01 20:40:29.841670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.727 [2024-10-01 20:40:29.841714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:41:34.727 [2024-10-01 20:40:29.847440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002885f1f8 00:41:34.727 [2024-10-01 20:40:29.848259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.728 [2024-10-01 20:40:29.848289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:41:34.728 [2024-10-01 20:40:29.859966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028870868 00:41:34.728 [2024-10-01 20:40:29.861442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.728 [2024-10-01 20:40:29.861483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:41:34.728 [2024-10-01 20:40:29.869531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002887b998 00:41:34.728 [2024-10-01 20:40:29.870599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.728 [2024-10-01 20:40:29.870641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:41:34.728 [2024-10-01 20:40:29.879415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028873298 00:41:34.728 [2024-10-01 20:40:29.880645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.728 [2024-10-01 20:40:29.880688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:34.728 [2024-10-01 20:40:29.890016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028860b48 00:41:34.728 [2024-10-01 20:40:29.891110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.728 [2024-10-01 20:40:29.891193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 
sqhd:006c p:0 m:0 dnr:0 00:41:34.728 [2024-10-01 20:40:29.899771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028869248 00:41:34.728 [2024-10-01 20:40:29.900974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.728 [2024-10-01 20:40:29.901015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:34.728 [2024-10-01 20:40:29.911879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288721b8 00:41:34.728 [2024-10-01 20:40:29.913641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.728 [2024-10-01 20:40:29.913681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:41:34.728 [2024-10-01 20:40:29.919208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288721b8 00:41:34.728 [2024-10-01 20:40:29.920120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.728 [2024-10-01 20:40:29.920172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:41:34.728 [2024-10-01 20:40:29.931057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028869248 00:41:34.728 [2024-10-01 20:40:29.932572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.728 [2024-10-01 20:40:29.932613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:41:34.728 [2024-10-01 20:40:29.940536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028862d08 00:41:34.728 [2024-10-01 20:40:29.941653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.728 [2024-10-01 20:40:29.941695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:41:34.728 [2024-10-01 20:40:29.950312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886b408 00:41:34.728 [2024-10-01 20:40:29.951478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.728 [2024-10-01 20:40:29.951520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:41:34.728 [2024-10-01 20:40:29.962066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886fff8 00:41:34.728 [2024-10-01 20:40:29.963923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.728 [2024-10-01 20:40:29.963968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:41:34.728 [2024-10-01 20:40:29.969444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028874378 00:41:34.728 [2024-10-01 20:40:29.970340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.728 [2024-10-01 20:40:29.970381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:41:34.988 [2024-10-01 20:40:29.982049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028867088 00:41:34.988 [2024-10-01 20:40:29.983715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.988 [2024-10-01 20:40:29.983759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:41:34.988 [2024-10-01 20:40:29.991563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028864ec8 00:41:34.988 [2024-10-01 20:40:29.992918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.988 [2024-10-01 20:40:29.992960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:41:34.988 [2024-10-01 20:40:30.001438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886d5c8 00:41:34.988 [2024-10-01 20:40:30.002716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.988 [2024-10-01 20:40:30.002758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:41:34.988 [2024-10-01 20:40:30.012934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028876da8 00:41:34.988 [2024-10-01 20:40:30.013872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.988 [2024-10-01 20:40:30.013917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:41:34.988 [2024-10-01 20:40:30.025641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288793a0 00:41:34.988 [2024-10-01 20:40:30.026852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.988 [2024-10-01 20:40:30.026926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:41:34.988 [2024-10-01 20:40:30.036806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028862498 00:41:34.988 [2024-10-01 20:40:30.037776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.988 [2024-10-01 20:40:30.037823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:41:34.988 [2024-10-01 20:40:30.050461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028873b08 00:41:34.988 [2024-10-01 20:40:30.052091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.988 [2024-10-01 20:40:30.052163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:41:34.988 [2024-10-01 20:40:30.061175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886f788 00:41:34.988 [2024-10-01 20:40:30.062567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.988 [2024-10-01 20:40:30.062611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:41:34.988 [2024-10-01 20:40:30.070858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028876970 00:41:34.988 [2024-10-01 20:40:30.072280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.988 [2024-10-01 20:40:30.072321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:41:34.988 [2024-10-01 20:40:30.080393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886de38 00:41:34.988 [2024-10-01 20:40:30.081530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.988 [2024-10-01 20:40:30.081570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:41:34.988 [2024-10-01 20:40:30.089788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886de38 00:41:34.988 [2024-10-01 20:40:30.090813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.988 [2024-10-01 20:40:30.090855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:41:34.988 [2024-10-01 20:40:30.099249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288786f8 00:41:34.988 [2024-10-01 20:40:30.100127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.988 [2024-10-01 20:40:30.100179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:34.988 [2024-10-01 20:40:30.108678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886c0b0 00:41:34.989 [2024-10-01 20:40:30.109410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.989 [2024-10-01 20:40:30.109440] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:41:34.989 [2024-10-01 20:40:30.120348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288674c0 00:41:34.989 [2024-10-01 20:40:30.121271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.989 [2024-10-01 20:40:30.121313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:41:34.989 [2024-10-01 20:40:30.129580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886afd0 00:41:34.989 [2024-10-01 20:40:30.130283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.989 [2024-10-01 20:40:30.130312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:41:34.989 [2024-10-01 20:40:30.138852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886f350 00:41:34.989 [2024-10-01 20:40:30.139455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.989 [2024-10-01 20:40:30.139483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:41:34.989 [2024-10-01 20:40:30.150359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028864ec8 00:41:34.989 [2024-10-01 20:40:30.151653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.989 [2024-10-01 20:40:30.151694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:41:34.989 [2024-10-01 20:40:30.159775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886b840 00:41:34.989 [2024-10-01 20:40:30.161012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.989 [2024-10-01 20:40:30.161053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:41:34.989 [2024-10-01 20:40:30.169330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886d5c8 00:41:34.989 [2024-10-01 20:40:30.170377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.989 [2024-10-01 20:40:30.170419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:41:34.989 [2024-10-01 20:40:30.178686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002885fa68 00:41:34.989 [2024-10-01 20:40:30.179582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.989 [2024-10-01 
20:40:30.179625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:41:34.989 [2024-10-01 20:40:30.190961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288617f0 00:41:34.989 [2024-10-01 20:40:30.192665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.989 [2024-10-01 20:40:30.192706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:41:34.989 [2024-10-01 20:40:30.204013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028873f40 00:41:34.989 [2024-10-01 20:40:30.206197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.989 [2024-10-01 20:40:30.206253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:41:34.989 [2024-10-01 20:40:30.215046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288613b8 00:41:34.989 [2024-10-01 20:40:30.216532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.989 [2024-10-01 20:40:30.216573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:41:34.989 [2024-10-01 20:40:30.224378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886f788 00:41:34.989 [2024-10-01 20:40:30.225710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.989 [2024-10-01 20:40:30.225749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:41:34.989 [2024-10-01 20:40:30.233747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028868e10 00:41:34.989 [2024-10-01 20:40:30.234975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:34.989 [2024-10-01 20:40:30.235015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:41:35.249 [2024-10-01 20:40:30.244314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002885cc00 00:41:35.249 [2024-10-01 20:40:30.245540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.249 [2024-10-01 20:40:30.245583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:41:35.249 [2024-10-01 20:40:30.254099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288628d0 00:41:35.249 [2024-10-01 20:40:30.255072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:41:35.249 [2024-10-01 20:40:30.255132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:35.249 [2024-10-01 20:40:30.265598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886c4e8 00:41:35.249 [2024-10-01 20:40:30.267054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.249 [2024-10-01 20:40:30.267097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:41:35.249 [2024-10-01 20:40:30.274728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028876da8 00:41:35.249 [2024-10-01 20:40:30.275978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.249 [2024-10-01 20:40:30.276006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:41:35.249 [2024-10-01 20:40:30.284444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002887bdd0 00:41:35.249 [2024-10-01 20:40:30.285672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.249 [2024-10-01 20:40:30.285713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:41:35.249 [2024-10-01 20:40:30.293619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288639b0 00:41:35.249 [2024-10-01 20:40:30.294555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.249 [2024-10-01 20:40:30.294598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:41:35.249 [2024-10-01 20:40:30.303289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288628d0 00:41:35.249 [2024-10-01 20:40:30.304271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.249 [2024-10-01 20:40:30.304298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:35.249 [2024-10-01 20:40:30.315090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028877e88 00:41:35.249 [2024-10-01 20:40:30.316645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.249 [2024-10-01 20:40:30.316685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:41:35.249 [2024-10-01 20:40:30.322194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028876da8 00:41:35.249 [2024-10-01 20:40:30.322906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18568 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:41:35.249 [2024-10-01 20:40:30.322933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:41:35.249 [2024-10-01 20:40:30.334133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002885d470 00:41:35.249 [2024-10-01 20:40:30.335340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.249 [2024-10-01 20:40:30.335366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:41:35.249 [2024-10-01 20:40:30.345416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028875cc8 00:41:35.249 [2024-10-01 20:40:30.346941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.249 [2024-10-01 20:40:30.346982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:41:35.249 [2024-10-01 20:40:30.354925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886c4e8 00:41:35.249 [2024-10-01 20:40:30.356511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.249 [2024-10-01 20:40:30.356551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:41:35.249 [2024-10-01 20:40:30.364499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028875458 00:41:35.249 [2024-10-01 20:40:30.365784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.249 [2024-10-01 20:40:30.365825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:41:35.249 [2024-10-01 20:40:30.373862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886bc78 00:41:35.249 [2024-10-01 20:40:30.375056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.249 [2024-10-01 20:40:30.375097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:41:35.249 [2024-10-01 20:40:30.385275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028875cc8 00:41:35.249 [2024-10-01 20:40:30.386950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.249 [2024-10-01 20:40:30.386991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:41:35.249 [2024-10-01 20:40:30.392446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288793a0 00:41:35.249 [2024-10-01 20:40:30.393336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:1254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.249 [2024-10-01 20:40:30.393377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:41:35.249 [2024-10-01 20:40:30.404116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028860f80 00:41:35.249 [2024-10-01 20:40:30.405572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.249 [2024-10-01 20:40:30.405613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:41:35.249 [2024-10-01 20:40:30.413505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002885fea0 00:41:35.249 [2024-10-01 20:40:30.414701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.250 [2024-10-01 20:40:30.414743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:41:35.250 [2024-10-01 20:40:30.423197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002885f630 00:41:35.250 [2024-10-01 20:40:30.424416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.250 [2024-10-01 20:40:30.424456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:41:35.250 [2024-10-01 20:40:30.434825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288639b0 00:41:35.250 [2024-10-01 20:40:30.436600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.250 [2024-10-01 20:40:30.436640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:35.250 [2024-10-01 20:40:30.441923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288771e0 00:41:35.250 [2024-10-01 20:40:30.442924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.250 [2024-10-01 20:40:30.442966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:35.250 [2024-10-01 20:40:30.453763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028873f40 00:41:35.250 [2024-10-01 20:40:30.455307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.250 [2024-10-01 20:40:30.455348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:41:35.250 [2024-10-01 20:40:30.460871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288736d0 00:41:35.250 [2024-10-01 20:40:30.461548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:10 nsid:1 lba:16510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.250 [2024-10-01 20:40:30.461578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:41:35.250 [2024-10-01 20:40:30.473733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028868168 00:41:35.250 [2024-10-01 20:40:30.475085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.250 [2024-10-01 20:40:30.475154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:41:35.250 [2024-10-01 20:40:30.485288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288782c0 00:41:35.250 [2024-10-01 20:40:30.486300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.250 [2024-10-01 20:40:30.486346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:41:35.250 [2024-10-01 20:40:30.497162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028871948 00:41:35.250 [2024-10-01 20:40:30.498746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.250 [2024-10-01 20:40:30.498796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:41:35.510 [2024-10-01 20:40:30.510052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002885f1f8 00:41:35.510 [2024-10-01 20:40:30.511828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.510 [2024-10-01 20:40:30.511873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:41:35.510 [2024-10-01 20:40:30.520327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886fbc0 00:41:35.510 [2024-10-01 20:40:30.521775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.510 [2024-10-01 20:40:30.521817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:41:35.510 [2024-10-01 20:40:30.530070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886eae0 00:41:35.510 [2024-10-01 20:40:30.531540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.510 [2024-10-01 20:40:30.531583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:41:35.510 [2024-10-01 20:40:30.540037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002885e550 00:41:35.510 [2024-10-01 20:40:30.541262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.510 [2024-10-01 20:40:30.541304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:41:35.510 [2024-10-01 20:40:30.549939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028879c10 00:41:35.510 [2024-10-01 20:40:30.551058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.510 [2024-10-01 20:40:30.551101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:41:35.510 [2024-10-01 20:40:30.559850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288725f0 00:41:35.510 [2024-10-01 20:40:30.560787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.510 [2024-10-01 20:40:30.560830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:35.510 [2024-10-01 20:40:30.569902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028878b30 00:41:35.510 [2024-10-01 20:40:30.570729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.510 [2024-10-01 20:40:30.570790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:41:35.510 [2024-10-01 20:40:30.582014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028869ef0 00:41:35.510 [2024-10-01 20:40:30.582961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.510 [2024-10-01 20:40:30.583005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:41:35.510 [2024-10-01 20:40:30.592054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002887a8b8 00:41:35.510 [2024-10-01 20:40:30.592850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.510 [2024-10-01 20:40:30.592882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:41:35.510 [2024-10-01 20:40:30.601900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002885f630 00:41:35.510 [2024-10-01 20:40:30.602582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.510 [2024-10-01 20:40:30.602612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:41:35.510 [2024-10-01 20:40:30.613962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028869680 00:41:35.510 [2024-10-01 
20:40:30.615419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.510 [2024-10-01 20:40:30.615465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:41:35.510 [2024-10-01 20:40:30.623603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288602d8 00:41:35.510 [2024-10-01 20:40:30.624934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.510 [2024-10-01 20:40:30.624980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:41:35.510 [2024-10-01 20:40:30.633769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028873f40 00:41:35.510 [2024-10-01 20:40:30.635065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.510 [2024-10-01 20:40:30.635109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:41:35.510 [2024-10-01 20:40:30.646264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028871d80 00:41:35.510 [2024-10-01 20:40:30.648097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.510 [2024-10-01 20:40:30.648146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:41:35.510 [2024-10-01 20:40:30.653728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028875458 00:41:35.510 [2024-10-01 20:40:30.654826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.510 [2024-10-01 20:40:30.654869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:41:35.510 [2024-10-01 20:40:30.666173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002885d038 00:41:35.510 [2024-10-01 20:40:30.667786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.511 [2024-10-01 20:40:30.667850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:41:35.511 [2024-10-01 20:40:30.673763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028866c50 00:41:35.511 [2024-10-01 20:40:30.674440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.511 [2024-10-01 20:40:30.674486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:41:35.511 [2024-10-01 20:40:30.685865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028865b70 
00:41:35.511 [2024-10-01 20:40:30.687171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.511 [2024-10-01 20:40:30.687238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:41:35.511 [2024-10-01 20:40:30.695054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028862d08 00:41:35.511 [2024-10-01 20:40:30.696067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.511 [2024-10-01 20:40:30.696125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:41:35.511 [2024-10-01 20:40:30.704913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886a328 00:41:35.511 [2024-10-01 20:40:30.705956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.511 [2024-10-01 20:40:30.705997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:35.511 [2024-10-01 20:40:30.716958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002887c208 00:41:35.511 [2024-10-01 20:40:30.718609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.511 [2024-10-01 20:40:30.718649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:41:35.511 [2024-10-01 20:40:30.724162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028867d30 00:41:35.511 [2024-10-01 20:40:30.724903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.511 [2024-10-01 20:40:30.724931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:41:35.511 [2024-10-01 20:40:30.736397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288689d8 00:41:35.511 [2024-10-01 20:40:30.737796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.511 [2024-10-01 20:40:30.737838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:41:35.511 [2024-10-01 20:40:30.745556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288613b8 00:41:35.511 [2024-10-01 20:40:30.746639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.511 [2024-10-01 20:40:30.746680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:41:35.511 [2024-10-01 20:40:30.755186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) 
with pdu=0x200028878f68 00:41:35.511 [2024-10-01 20:40:30.756287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.511 [2024-10-01 20:40:30.756329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:41:35.771 [2024-10-01 20:40:30.768297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002887d720 00:41:35.771 [2024-10-01 20:40:30.769918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.771 [2024-10-01 20:40:30.769960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:41:35.771 [2024-10-01 20:40:30.775517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886e6a8 00:41:35.771 [2024-10-01 20:40:30.776391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.771 [2024-10-01 20:40:30.776432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:41:35.771 [2024-10-01 20:40:30.787233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288786f8 00:41:35.771 [2024-10-01 20:40:30.788696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.771 [2024-10-01 20:40:30.788736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:41:35.771 [2024-10-01 20:40:30.796491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002885dce0 00:41:35.771 [2024-10-01 20:40:30.797690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.771 [2024-10-01 20:40:30.797731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:41:35.771 [2024-10-01 20:40:30.806227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028869248 00:41:35.771 [2024-10-01 20:40:30.807453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.771 [2024-10-01 20:40:30.807493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:41:35.771 [2024-10-01 20:40:30.817552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288602d8 00:41:35.771 24728.00 IOPS, 96.59 MiB/s [2024-10-01 20:40:30.819074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.771 [2024-10-01 20:40:30.819113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:41:35.771 [2024-10-01 20:40:30.825349] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886eae0 00:41:35.771 [2024-10-01 20:40:30.826245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.771 [2024-10-01 20:40:30.826286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:41:35.771 [2024-10-01 20:40:30.837185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886bc78 00:41:35.771 [2024-10-01 20:40:30.838654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.771 [2024-10-01 20:40:30.838695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:41:35.771 [2024-10-01 20:40:30.846516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886ef18 00:41:35.771 [2024-10-01 20:40:30.847740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:18137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.771 [2024-10-01 20:40:30.847782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:41:35.771 [2024-10-01 20:40:30.856469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288782c0 00:41:35.771 [2024-10-01 20:40:30.857775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.771 [2024-10-01 20:40:30.857815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:41:35.771 [2024-10-01 20:40:30.865729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028875890 00:41:35.771 [2024-10-01 20:40:30.866721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.771 [2024-10-01 20:40:30.866763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:41:35.771 [2024-10-01 20:40:30.875483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288736d0 00:41:35.771 [2024-10-01 20:40:30.876504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.771 [2024-10-01 20:40:30.876545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:35.771 [2024-10-01 20:40:30.887158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886d5c8 00:41:35.771 [2024-10-01 20:40:30.888694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.771 [2024-10-01 20:40:30.888735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:41:35.771 [2024-10-01 20:40:30.894306] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886ef18 00:41:35.771 [2024-10-01 20:40:30.894988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.771 [2024-10-01 20:40:30.895014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:41:35.771 [2024-10-01 20:40:30.906003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028860b48 00:41:35.771 [2024-10-01 20:40:30.907380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.771 [2024-10-01 20:40:30.907421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:41:35.772 [2024-10-01 20:40:30.915226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002885dce0 00:41:35.772 [2024-10-01 20:40:30.916323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.772 [2024-10-01 20:40:30.916364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:41:35.772 [2024-10-01 20:40:30.924840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288678f8 00:41:35.772 [2024-10-01 20:40:30.925910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.772 [2024-10-01 20:40:30.925949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:41:35.772 [2024-10-01 20:40:30.936607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002887a8b8 00:41:35.772 [2024-10-01 20:40:30.938196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.772 [2024-10-01 20:40:30.938237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:41:35.772 [2024-10-01 20:40:30.946546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028865300 00:41:35.772 [2024-10-01 20:40:30.948215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.772 [2024-10-01 20:40:30.948256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:41:35.772 [2024-10-01 20:40:30.955967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886b408 00:41:35.772 [2024-10-01 20:40:30.957610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.772 [2024-10-01 20:40:30.957650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:41:35.772 [2024-10-01 
20:40:30.965714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028873b08 00:41:35.772 [2024-10-01 20:40:30.967103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.772 [2024-10-01 20:40:30.967190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:41:35.772 [2024-10-01 20:40:30.975328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886c0b0 00:41:35.772 [2024-10-01 20:40:30.976608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.772 [2024-10-01 20:40:30.976649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:41:35.772 [2024-10-01 20:40:30.984881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028867088 00:41:35.772 [2024-10-01 20:40:30.986018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.772 [2024-10-01 20:40:30.986061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:41:35.772 [2024-10-01 20:40:30.994491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002887b998 00:41:35.772 [2024-10-01 20:40:30.995465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.772 [2024-10-01 20:40:30.995538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:35.772 [2024-10-01 20:40:31.003901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028874378 00:41:35.772 [2024-10-01 20:40:31.004770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.772 [2024-10-01 20:40:31.004830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:41:35.772 [2024-10-01 20:40:31.015712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028876100 00:41:35.772 [2024-10-01 20:40:31.016709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:35.772 [2024-10-01 20:40:31.016752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:36.032 [2024-10-01 20:40:31.025986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002885d470 00:41:36.032 [2024-10-01 20:40:31.026960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.032 [2024-10-01 20:40:31.027003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 
00:41:36.032 [2024-10-01 20:40:31.036412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028867d30 00:41:36.032 [2024-10-01 20:40:31.037095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.032 [2024-10-01 20:40:31.037151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:41:36.032 [2024-10-01 20:40:31.047153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028876da8 00:41:36.032 [2024-10-01 20:40:31.047802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.032 [2024-10-01 20:40:31.047859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:41:36.032 [2024-10-01 20:40:31.060487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028875458 00:41:36.032 [2024-10-01 20:40:31.061848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.032 [2024-10-01 20:40:31.061890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:41:36.032 [2024-10-01 20:40:31.070686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002887c640 00:41:36.032 [2024-10-01 20:40:31.071934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.032 [2024-10-01 20:40:31.071978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:41:36.032 [2024-10-01 20:40:31.081819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028875890 00:41:36.032 [2024-10-01 20:40:31.082764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.032 [2024-10-01 20:40:31.082808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:41:36.032 [2024-10-01 20:40:31.091407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288602d8 00:41:36.032 [2024-10-01 20:40:31.092093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.032 [2024-10-01 20:40:31.092133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:41:36.032 [2024-10-01 20:40:31.100894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028870430 00:41:36.032 [2024-10-01 20:40:31.101472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.032 [2024-10-01 20:40:31.101502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 
sqhd:0053 p:0 m:0 dnr:0 00:41:36.032 [2024-10-01 20:40:31.112032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002887c640 00:41:36.032 [2024-10-01 20:40:31.113315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.032 [2024-10-01 20:40:31.113358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:41:36.032 [2024-10-01 20:40:31.121150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002885d8a8 00:41:36.032 [2024-10-01 20:40:31.122298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.032 [2024-10-01 20:40:31.122340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:41:36.032 [2024-10-01 20:40:31.130653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028866818 00:41:36.032 [2024-10-01 20:40:31.131801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.032 [2024-10-01 20:40:31.131866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:41:36.032 [2024-10-01 20:40:31.142295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028860b48 00:41:36.032 [2024-10-01 20:40:31.143987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.032 [2024-10-01 20:40:31.144029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:41:36.032 [2024-10-01 20:40:31.149460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886fff8 00:41:36.032 [2024-10-01 20:40:31.150371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.032 [2024-10-01 20:40:31.150412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:41:36.032 [2024-10-01 20:40:31.161191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288628d0 00:41:36.032 [2024-10-01 20:40:31.162623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.033 [2024-10-01 20:40:31.162664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:41:36.033 [2024-10-01 20:40:31.170321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028864ec8 00:41:36.033 [2024-10-01 20:40:31.171523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.033 [2024-10-01 20:40:31.171564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:41:36.033 [2024-10-01 20:40:31.179753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002887c640 00:41:36.033 [2024-10-01 20:40:31.180989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.033 [2024-10-01 20:40:31.181029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:41:36.033 [2024-10-01 20:40:31.191584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028875020 00:41:36.033 [2024-10-01 20:40:31.193326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.033 [2024-10-01 20:40:31.193367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:36.033 [2024-10-01 20:40:31.198694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028865300 00:41:36.033 [2024-10-01 20:40:31.199721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.033 [2024-10-01 20:40:31.199760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:36.033 [2024-10-01 20:40:31.210644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886a328 00:41:36.033 [2024-10-01 20:40:31.212265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.033 [2024-10-01 20:40:31.212307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:41:36.033 [2024-10-01 20:40:31.218044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886c4e8 00:41:36.033 [2024-10-01 20:40:31.218750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.033 [2024-10-01 20:40:31.218779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:41:36.033 [2024-10-01 20:40:31.230187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886f350 00:41:36.033 [2024-10-01 20:40:31.231487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.033 [2024-10-01 20:40:31.231530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:41:36.033 [2024-10-01 20:40:31.239275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002887bdd0 00:41:36.033 [2024-10-01 20:40:31.240328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.033 [2024-10-01 20:40:31.240368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:41:36.033 [2024-10-01 20:40:31.248818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028864658 00:41:36.033 [2024-10-01 20:40:31.249867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.033 [2024-10-01 20:40:31.249907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:41:36.033 [2024-10-01 20:40:31.258724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028862d08 00:41:36.033 [2024-10-01 20:40:31.259310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.033 [2024-10-01 20:40:31.259339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:41:36.033 [2024-10-01 20:40:31.269917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002887df90 00:41:36.033 [2024-10-01 20:40:31.271261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.033 [2024-10-01 20:40:31.271302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:41:36.033 [2024-10-01 20:40:31.279238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002887c640 00:41:36.033 [2024-10-01 20:40:31.280603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.033 [2024-10-01 20:40:31.280646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:41:36.293 [2024-10-01 20:40:31.289471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028877a50 00:41:36.293 [2024-10-01 20:40:31.290602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.293 [2024-10-01 20:40:31.290645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:41:36.293 [2024-10-01 20:40:31.299227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886d190 00:41:36.293 [2024-10-01 20:40:31.300383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.293 [2024-10-01 20:40:31.300425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:36.293 [2024-10-01 20:40:31.311085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886de38 00:41:36.293 [2024-10-01 20:40:31.312731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.293 [2024-10-01 20:40:31.312772] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:41:36.293 [2024-10-01 20:40:31.318319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028863578 00:41:36.293 [2024-10-01 20:40:31.319126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.293 [2024-10-01 20:40:31.319175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:41:36.293 [2024-10-01 20:40:31.328283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002887d2e8 00:41:36.293 [2024-10-01 20:40:31.329041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.293 [2024-10-01 20:40:31.329070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:41:36.293 [2024-10-01 20:40:31.339859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028878b30 00:41:36.293 [2024-10-01 20:40:31.340828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.293 [2024-10-01 20:40:31.340871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:36.293 [2024-10-01 20:40:31.349246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886e270 00:41:36.293 [2024-10-01 20:40:31.350093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.293 [2024-10-01 20:40:31.350143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:41:36.293 [2024-10-01 20:40:31.358597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028860b48 00:41:36.293 [2024-10-01 20:40:31.359245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.293 [2024-10-01 20:40:31.359273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:36.293 [2024-10-01 20:40:31.367800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028867088 00:41:36.293 [2024-10-01 20:40:31.368396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.293 [2024-10-01 20:40:31.368424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:41:36.293 [2024-10-01 20:40:31.378830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028875020 00:41:36.293 [2024-10-01 20:40:31.380095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.293 [2024-10-01 20:40:31.380163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:41:36.293 [2024-10-01 20:40:31.388284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002887a048 00:41:36.293 [2024-10-01 20:40:31.389400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.293 [2024-10-01 20:40:31.389440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:41:36.293 [2024-10-01 20:40:31.397666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002885e118 00:41:36.293 [2024-10-01 20:40:31.398678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.293 [2024-10-01 20:40:31.398718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:41:36.293 [2024-10-01 20:40:31.408573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028864658 00:41:36.293 [2024-10-01 20:40:31.409970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.293 [2024-10-01 20:40:31.410010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:41:36.293 [2024-10-01 20:40:31.418103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886d5c8 00:41:36.293 [2024-10-01 20:40:31.419370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.293 [2024-10-01 20:40:31.419411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:41:36.293 [2024-10-01 20:40:31.427341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028863140 00:41:36.293 [2024-10-01 20:40:31.428502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.293 [2024-10-01 20:40:31.428542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:41:36.293 [2024-10-01 20:40:31.437352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002887a048 00:41:36.294 [2024-10-01 20:40:31.438192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.294 [2024-10-01 20:40:31.438242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:41:36.294 [2024-10-01 20:40:31.446865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028864658 00:41:36.294 [2024-10-01 20:40:31.447568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.294 [2024-10-01 
20:40:31.447597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:41:36.294 [2024-10-01 20:40:31.456193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028875458 00:41:36.294 [2024-10-01 20:40:31.456758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.294 [2024-10-01 20:40:31.456786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:41:36.294 [2024-10-01 20:40:31.467482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028875cc8 00:41:36.294 [2024-10-01 20:40:31.468777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.294 [2024-10-01 20:40:31.468818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:41:36.294 [2024-10-01 20:40:31.476843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028870430 00:41:36.294 [2024-10-01 20:40:31.478054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.294 [2024-10-01 20:40:31.478095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:41:36.294 [2024-10-01 20:40:31.485995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028864220 00:41:36.294 [2024-10-01 20:40:31.486997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.294 [2024-10-01 20:40:31.487040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:41:36.294 [2024-10-01 20:40:31.495554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886afd0 00:41:36.294 [2024-10-01 20:40:31.496578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.294 [2024-10-01 20:40:31.496619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:36.294 [2024-10-01 20:40:31.507322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028876da8 00:41:36.294 [2024-10-01 20:40:31.508919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.294 [2024-10-01 20:40:31.508959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:41:36.294 [2024-10-01 20:40:31.514533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002885f630 00:41:36.294 [2024-10-01 20:40:31.515273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:41:36.294 [2024-10-01 20:40:31.515302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:41:36.294 [2024-10-01 20:40:31.524467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028877a50 00:41:36.294 [2024-10-01 20:40:31.525225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.294 [2024-10-01 20:40:31.525253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:41:36.294 [2024-10-01 20:40:31.535949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028875cc8 00:41:36.294 [2024-10-01 20:40:31.536850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.294 [2024-10-01 20:40:31.536892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:41:36.554 [2024-10-01 20:40:31.546036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002887bdd0 00:41:36.554 [2024-10-01 20:40:31.546945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.554 [2024-10-01 20:40:31.546988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:41:36.554 [2024-10-01 20:40:31.556015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028862d08 00:41:36.554 [2024-10-01 20:40:31.556621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.554 [2024-10-01 20:40:31.556650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:41:36.554 [2024-10-01 20:40:31.567209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002885fa68 00:41:36.554 [2024-10-01 20:40:31.568578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.554 [2024-10-01 20:40:31.568618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:41:36.554 [2024-10-01 20:40:31.576537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886cd58 00:41:36.554 [2024-10-01 20:40:31.577749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.554 [2024-10-01 20:40:31.577790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:41:36.554 [2024-10-01 20:40:31.585955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288793a0 00:41:36.554 [2024-10-01 20:40:31.587026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9713 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:41:36.554 [2024-10-01 20:40:31.587068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:41:36.554 [2024-10-01 20:40:31.597686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288736d0 00:41:36.554 [2024-10-01 20:40:31.599452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.554 [2024-10-01 20:40:31.599494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:41:36.554 [2024-10-01 20:40:31.604804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028865fa8 00:41:36.554 [2024-10-01 20:40:31.605751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.554 [2024-10-01 20:40:31.605795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:41:36.554 [2024-10-01 20:40:31.616911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028867d30 00:41:36.554 [2024-10-01 20:40:31.618434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.554 [2024-10-01 20:40:31.618476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:41:36.554 [2024-10-01 20:40:31.624122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028876970 00:41:36.554 [2024-10-01 20:40:31.624832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.554 [2024-10-01 20:40:31.624859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:41:36.554 [2024-10-01 20:40:31.636233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886da00 00:41:36.555 [2024-10-01 20:40:31.637401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.555 [2024-10-01 20:40:31.637442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:41:36.555 [2024-10-01 20:40:31.645552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002885edc0 00:41:36.555 [2024-10-01 20:40:31.646564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.555 [2024-10-01 20:40:31.646605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:41:36.555 [2024-10-01 20:40:31.655270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028870868 00:41:36.555 [2024-10-01 20:40:31.656397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 
lba:19971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.555 [2024-10-01 20:40:31.656438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:41:36.555 [2024-10-01 20:40:31.666869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028862060 00:41:36.555 [2024-10-01 20:40:31.668564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.555 [2024-10-01 20:40:31.668604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:41:36.555 [2024-10-01 20:40:31.674436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028862d08 00:41:36.555 [2024-10-01 20:40:31.675358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.555 [2024-10-01 20:40:31.675401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:41:36.555 [2024-10-01 20:40:31.687371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028877a50 00:41:36.555 [2024-10-01 20:40:31.688965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.555 [2024-10-01 20:40:31.689009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:41:36.555 [2024-10-01 20:40:31.697987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028860b48 00:41:36.555 [2024-10-01 20:40:31.699273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.555 [2024-10-01 20:40:31.699317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:41:36.555 [2024-10-01 20:40:31.708629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886c920 00:41:36.555 [2024-10-01 20:40:31.709973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.555 [2024-10-01 20:40:31.710017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:41:36.555 [2024-10-01 20:40:31.721734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002885d038 00:41:36.555 [2024-10-01 20:40:31.723527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.555 [2024-10-01 20:40:31.723572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:41:36.555 [2024-10-01 20:40:31.729127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028868e10 00:41:36.555 [2024-10-01 20:40:31.730082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:5674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.555 [2024-10-01 20:40:31.730145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:41:36.555 [2024-10-01 20:40:31.741659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002885e118 00:41:36.555 [2024-10-01 20:40:31.743210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.555 [2024-10-01 20:40:31.743252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:41:36.555 [2024-10-01 20:40:31.751306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028875020 00:41:36.555 [2024-10-01 20:40:31.752600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.555 [2024-10-01 20:40:31.752642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:36.555 [2024-10-01 20:40:31.761509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x2000288747b0 00:41:36.555 [2024-10-01 20:40:31.762797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.555 [2024-10-01 20:40:31.762840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:41:36.555 [2024-10-01 20:40:31.770977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002885d038 00:41:36.555 [2024-10-01 20:40:31.772006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.555 [2024-10-01 20:40:31.772036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:41:36.555 [2024-10-01 20:40:31.781028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028872e60 00:41:36.555 [2024-10-01 20:40:31.782087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.555 [2024-10-01 20:40:31.782138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:36.555 [2024-10-01 20:40:31.793246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002886e270 00:41:36.555 [2024-10-01 20:40:31.794862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:36.555 [2024-10-01 20:40:31.794905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:41:36.555 [2024-10-01 20:40:31.800701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x200028872a28 00:41:36.555 [2024-10-01 20:40:31.801446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:41:36.555 [2024-10-01 20:40:31.801476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:41:36.815 [2024-10-01 20:40:31.814237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b77e0) with pdu=0x20002887b128
00:41:36.815 [2024-10-01 20:40:31.815595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:41:36.815 [2024-10-01 20:40:31.815639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:41:36.815 25026.00 IOPS, 97.76 MiB/s
00:41:36.815 Latency(us)
00:41:36.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:36.815 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:41:36.815 nvme0n1 : 2.01 25005.08 97.68 0.00 0.00 5113.74 2159.71 14894.55
00:41:36.815 ===================================================================================================================
00:41:36.815 Total : 25005.08 97.68 0.00 0.00 5113.74 2159.71 14894.55
00:41:36.815 {
00:41:36.815 "results": [
00:41:36.815 {
00:41:36.815 "job": "nvme0n1",
00:41:36.815 "core_mask": "0x2",
00:41:36.815 "workload": "randwrite",
00:41:36.815 "status": "finished",
00:41:36.815 "queue_depth": 128,
00:41:36.815 "io_size": 4096,
00:41:36.815 "runtime": 2.008152,
00:41:36.815 "iops": 25005.079296786298,
00:41:36.815 "mibps": 97.67609100307148,
00:41:36.815 "io_failed": 0,
00:41:36.815 "io_timeout": 0,
00:41:36.815 "avg_latency_us": 5113.736543158916,
00:41:36.815 "min_latency_us": 2159.7090909090907,
00:41:36.815 "max_latency_us": 14894.545454545454
00:41:36.815 }
00:41:36.815 ],
00:41:36.815 "core_count": 1
00:41:36.815 }
00:41:36.815 20:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
20:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
20:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:41:36.815 | .driver_specific
00:41:36.815 | .nvme_error
00:41:36.815 | .status_code
00:41:36.815 | .command_transient_transport_error'
00:41:36.815 20:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:41:37.074 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 196 > 0 ))
00:41:37.074 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94121
00:41:37.074 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94121 ']'
00:41:37.074 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94121
00:41:37.074 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:41:37.074 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:41:37.074 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94121
00:41:37.074 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:41:37.074 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:41:37.074 killing process with pid 94121
00:41:37.074 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94121'
00:41:37.074 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94121
00:41:37.074 Received shutdown signal, test time was about 2.000000 seconds
00:41:37.074
00:41:37.074 Latency(us)
00:41:37.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:37.074 ===================================================================================================================
00:41:37.074 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:41:37.074 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94121
00:41:37.334 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:41:37.334 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:41:37.334 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:41:37.334 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:41:37.334 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:41:37.334 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94198
00:41:37.334 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:41:37.334 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94198 /var/tmp/bperf.sock
00:41:37.334 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94198 ']'
00:41:37.334 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:41:37.334 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:41:37.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:41:37.334 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:41:37.334 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:41:37.334 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:41:37.334 I/O size of 131072 is greater than zero copy threshold (65536).
00:41:37.334 Zero copy mechanism will not be used.
00:41:37.334 [2024-10-01 20:40:32.412114] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization...
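The xtrace entries above show how host/digest.sh validates the run that just finished: it fetches per-bdev NVMe error statistics over the bdevperf RPC socket and extracts the count of TRANSIENT TRANSPORT ERROR completions, asserting it is positive (196 here). The bash sketch below reconstructs that check from the trace alone; the rpc.py path, socket path, bdev name, and jq filter are taken verbatim from the log, while the function body is an approximation rather than the actual host/digest.sh source.

#!/usr/bin/env bash
# Sketch, not the verbatim host/digest.sh source.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as seen in the trace
bperf_sock=/var/tmp/bperf.sock                       # bdevperf RPC socket

get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat reports per-status-code NVMe error counters here
    # because the controller was set up with bdev_nvme_set_options
    # --nvme-error-stat (see the trace further below).
    "$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# The test passes only if at least one injected digest error was counted:
(( $(get_transient_errcount nvme0n1) > 0 ))

Because retries are disabled for these runs (--bdev-retry-count -1 in the trace), each injected digest failure surfaces as exactly one transient-transport-error completion in this counter.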
00:41:37.334 [2024-10-01 20:40:32.412222] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94198 ]
00:41:37.334 [2024-10-01 20:40:32.540541] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:37.334 [2024-10-01 20:40:32.590347] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:41:37.593 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:41:37.593 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:41:37.593 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:41:37.593 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:41:37.852 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:41:37.852 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:41:37.852 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:41:37.852 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:41:37.852 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:41:37.852 20:40:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:41:38.111 nvme0n1
00:41:38.111 20:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:41:38.111 20:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:41:38.111 20:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:41:38.111 20:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:41:38.111 20:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:41:38.112 20:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:41:38.372 I/O size of 131072 is greater than zero copy threshold (65536).
00:41:38.372 Zero copy mechanism will not be used.
00:41:38.372 Running I/O for 2 seconds...
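The setup traced above is the core of the digest-error test: bdevperf starts with a dedicated RPC socket, NVMe error statistics are enabled with retries turned off, the controller is attached with TCP data digest (--ddgst) enabled while injection is still disabled, and only then is crc32c error injection switched to corrupt before the workload runs. The following is a condensed bash sketch of that sequence, assuming the binaries, socket, and target address shown in the log; the real script drives these calls through its bperf_rpc/rpc_cmd wrappers and waits for the socket with waitforlisten rather than a raw sleep.

#!/usr/bin/env bash
# Condensed sketch of the traced setup; not the verbatim host/digest.sh source.
spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bperf.sock

# Start bdevperf on core 1 (-m 2) with its own RPC socket, idling until
# perform_tests is sent (-z); workload parameters match the trace.
"$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
sleep 1  # stand-in for the waitforlisten helper seen in the trace

# Count NVMe error completions per status code and never retry failed I/O,
# so injected digest errors stay visible in bdev_get_iostat.
"$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the target with TCP data digest enabled; injection is disabled
# first so controller creation itself succeeds.
"$spdk/scripts/rpc.py" -s "$sock" accel_error_inject_error -o crc32c -t disable
"$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Switch crc32c injection to corrupt (-o crc32c -t corrupt -i 32, as traced)
# and run the workload; each corrupted digest produces the "Data digest
# error" / TRANSIENT TRANSPORT ERROR records that follow in the log.
"$spdk/scripts/rpc.py" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests

Corrupting the host-side crc32c computation while --ddgst is active is what makes the errors below transport-level rather than media-level: the digest mismatch is detected per PDU, and with --bdev-retry-count -1 each one completes the I/O with TRANSIENT TRANSPORT ERROR (00/22) instead of being retried.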
00:41:38.372 [2024-10-01 20:40:33.450261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.372 [2024-10-01 20:40:33.450544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.372 [2024-10-01 20:40:33.450570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:38.372 [2024-10-01 20:40:33.454829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.372 [2024-10-01 20:40:33.455142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.372 [2024-10-01 20:40:33.455179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:38.372 [2024-10-01 20:40:33.459432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.372 [2024-10-01 20:40:33.459699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.372 [2024-10-01 20:40:33.459725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:38.372 [2024-10-01 20:40:33.463857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.372 [2024-10-01 20:40:33.464112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.372 [2024-10-01 20:40:33.464146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:38.372 [2024-10-01 20:40:33.468301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.372 [2024-10-01 20:40:33.468549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.372 [2024-10-01 20:40:33.468573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:38.372 [2024-10-01 20:40:33.472855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.372 [2024-10-01 20:40:33.473106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.372 [2024-10-01 20:40:33.473154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:38.372 [2024-10-01 20:40:33.477321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.372 [2024-10-01 20:40:33.477595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.372 [2024-10-01 20:40:33.477621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:38.372 [2024-10-01 20:40:33.481769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.372 [2024-10-01 20:40:33.482018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.372 [2024-10-01 20:40:33.482044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:38.372 [2024-10-01 20:40:33.486189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.372 [2024-10-01 20:40:33.486439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.372 [2024-10-01 20:40:33.486459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:38.372 [2024-10-01 20:40:33.490553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.372 [2024-10-01 20:40:33.490816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.372 [2024-10-01 20:40:33.490867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:38.372 [2024-10-01 20:40:33.494881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.372 [2024-10-01 20:40:33.495139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.372 [2024-10-01 20:40:33.495163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:38.372 [2024-10-01 20:40:33.499268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.372 [2024-10-01 20:40:33.499515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.372 [2024-10-01 20:40:33.499540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:38.372 [2024-10-01 20:40:33.503628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.372 [2024-10-01 20:40:33.503905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.372 [2024-10-01 20:40:33.503931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:38.372 [2024-10-01 20:40:33.507993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.373 [2024-10-01 20:40:33.508285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.373 [2024-10-01 20:40:33.508308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:38.373 [2024-10-01 20:40:33.512474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.373 [2024-10-01 20:40:33.512740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.373 [2024-10-01 20:40:33.512764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:38.373 [2024-10-01 20:40:33.516914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.373 [2024-10-01 20:40:33.517189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.373 [2024-10-01 20:40:33.517213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:38.373 [2024-10-01 20:40:33.521410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.373 [2024-10-01 20:40:33.521676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.373 [2024-10-01 20:40:33.521700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:38.373 [2024-10-01 20:40:33.525835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.373 [2024-10-01 20:40:33.526084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.373 [2024-10-01 20:40:33.526118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:38.373 [2024-10-01 20:40:33.530306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.373 [2024-10-01 20:40:33.530555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.373 [2024-10-01 20:40:33.530578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:38.373 [2024-10-01 20:40:33.534714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.373 [2024-10-01 20:40:33.534963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.373 [2024-10-01 20:40:33.534987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:38.373 [2024-10-01 20:40:33.539106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.373 [2024-10-01 20:40:33.539376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.373 [2024-10-01 20:40:33.539400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:38.373 [2024-10-01 20:40:33.543493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.373 [2024-10-01 20:40:33.543753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.373 [2024-10-01 20:40:33.543777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:38.373 [2024-10-01 20:40:33.547970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.373 [2024-10-01 20:40:33.548255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.373 [2024-10-01 20:40:33.548279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:38.373 [2024-10-01 20:40:33.552376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.373 [2024-10-01 20:40:33.552632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.373 [2024-10-01 20:40:33.552672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:38.373 [2024-10-01 20:40:33.556844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.373 [2024-10-01 20:40:33.557105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.373 [2024-10-01 20:40:33.557152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:38.373 [2024-10-01 20:40:33.561387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.373 [2024-10-01 20:40:33.561667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.373 [2024-10-01 20:40:33.561691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:38.373 [2024-10-01 20:40:33.565967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.373 [2024-10-01 20:40:33.566258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.373 [2024-10-01 20:40:33.566282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:38.373 [2024-10-01 20:40:33.570408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.373 [2024-10-01 20:40:33.570676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.373 
[2024-10-01 20:40:33.570700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:38.373 [2024-10-01 20:40:33.574845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.373 [2024-10-01 20:40:33.575107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.373 [2024-10-01 20:40:33.575139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:38.373 [2024-10-01 20:40:33.579329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.373 [2024-10-01 20:40:33.579590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.373 [2024-10-01 20:40:33.579616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:38.373 [2024-10-01 20:40:33.583736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.373 [2024-10-01 20:40:33.584033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.373 [2024-10-01 20:40:33.584057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:38.373 [2024-10-01 20:40:33.588203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.373 [2024-10-01 20:40:33.588452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.373 [2024-10-01 20:40:33.588476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:38.373 [2024-10-01 20:40:33.592669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.373 [2024-10-01 20:40:33.592932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.373 [2024-10-01 20:40:33.592956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:38.373 [2024-10-01 20:40:33.597063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.373 [2024-10-01 20:40:33.597350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.374 [2024-10-01 20:40:33.597375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:38.374 [2024-10-01 20:40:33.601662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.374 [2024-10-01 20:40:33.601928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:41:38.374 [2024-10-01 20:40:33.601952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:38.374 [2024-10-01 20:40:33.606099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.374 [2024-10-01 20:40:33.606372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.374 [2024-10-01 20:40:33.606396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:38.374 [2024-10-01 20:40:33.610589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.374 [2024-10-01 20:40:33.610850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.374 [2024-10-01 20:40:33.610875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:38.374 [2024-10-01 20:40:33.615147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.374 [2024-10-01 20:40:33.615409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.374 [2024-10-01 20:40:33.615432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:38.374 [2024-10-01 20:40:33.619721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.374 [2024-10-01 20:40:33.620055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.374 [2024-10-01 20:40:33.620080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:38.637 [2024-10-01 20:40:33.624602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.637 [2024-10-01 20:40:33.624890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.637 [2024-10-01 20:40:33.624915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:38.637 [2024-10-01 20:40:33.629230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.637 [2024-10-01 20:40:33.629500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.637 [2024-10-01 20:40:33.629540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:38.637 [2024-10-01 20:40:33.633848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.637 [2024-10-01 20:40:33.634094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.637 [2024-10-01 20:40:33.634128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:38.637 [2024-10-01 20:40:33.638281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.637 [2024-10-01 20:40:33.638531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.637 [2024-10-01 20:40:33.638554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:38.637 [2024-10-01 20:40:33.642737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.637 [2024-10-01 20:40:33.642987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.637 [2024-10-01 20:40:33.643011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:38.637 [2024-10-01 20:40:33.647191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.637 [2024-10-01 20:40:33.647452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.637 [2024-10-01 20:40:33.647477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:38.637 [2024-10-01 20:40:33.651556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.637 [2024-10-01 20:40:33.651803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.637 [2024-10-01 20:40:33.651853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:38.637 [2024-10-01 20:40:33.655991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.637 [2024-10-01 20:40:33.656282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.637 [2024-10-01 20:40:33.656305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:38.637 [2024-10-01 20:40:33.660440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.637 [2024-10-01 20:40:33.660687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:38.637 [2024-10-01 20:40:33.660712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:38.637 [2024-10-01 20:40:33.664927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:38.637 [2024-10-01 20:40:33.665207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:38.637 [2024-10-01 20:40:33.665232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:41:38.637 [2024-10-01 20:40:33.669476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90
00:41:38.637 [2024-10-01 20:40:33.669747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:38.637 [2024-10-01 20:40:33.669772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:41:38.637 [2024-10-01 20:40:33.674019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90
00:41:38.637 [2024-10-01 20:40:33.674309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:38.637 [2024-10-01 20:40:33.674334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... roughly 120 further triplets of the same three lines elided: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90, a WRITE *NOTICE* with varying lba, and COMMAND TRANSIENT TRANSPORT ERROR (00/22) with sqhd cycling 0001/0021/0041/0061, spanning 2024-10-01 20:40:33.678 through 20:40:34.266 ...]
00:41:39.199 [2024-10-01 20:40:34.270778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90
00:41:39.199 [2024-10-01 20:40:34.271040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:39.199 [2024-10-01 20:40:34.271064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[2024-10-01 20:40:34.275345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.199 [2024-10-01 20:40:34.279389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.199 [2024-10-01 20:40:34.279638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.199 [2024-10-01 20:40:34.279662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.199 [2024-10-01 20:40:34.283941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.199 [2024-10-01 20:40:34.284213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.199 [2024-10-01 20:40:34.284238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.199 [2024-10-01 20:40:34.288600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.199 [2024-10-01 20:40:34.288861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.199 [2024-10-01 20:40:34.288885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.199 [2024-10-01 20:40:34.293205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.199 [2024-10-01 20:40:34.293494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.199 [2024-10-01 20:40:34.293519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.199 [2024-10-01 20:40:34.297716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.199 [2024-10-01 20:40:34.297972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.199 [2024-10-01 20:40:34.297996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.199 [2024-10-01 20:40:34.302293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.199 [2024-10-01 20:40:34.302560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.199 [2024-10-01 20:40:34.302585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.199 [2024-10-01 20:40:34.306916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.199 [2024-10-01 20:40:34.307197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:41:39.199 [2024-10-01 20:40:34.307221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.199 [2024-10-01 20:40:34.311392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.199 [2024-10-01 20:40:34.311645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.199 [2024-10-01 20:40:34.311669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.199 [2024-10-01 20:40:34.315946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.199 [2024-10-01 20:40:34.316245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.199 [2024-10-01 20:40:34.316268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.199 [2024-10-01 20:40:34.320443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.199 [2024-10-01 20:40:34.320693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.199 [2024-10-01 20:40:34.320717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.199 [2024-10-01 20:40:34.324829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 [2024-10-01 20:40:34.325078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.325102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.200 [2024-10-01 20:40:34.329323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 [2024-10-01 20:40:34.329603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.329627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.200 [2024-10-01 20:40:34.333753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 [2024-10-01 20:40:34.334019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.334043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.200 [2024-10-01 20:40:34.338252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 [2024-10-01 20:40:34.338503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.338526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.200 [2024-10-01 20:40:34.342651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 [2024-10-01 20:40:34.342911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.342935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.200 [2024-10-01 20:40:34.347086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 [2024-10-01 20:40:34.347351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.347374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.200 [2024-10-01 20:40:34.351440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 [2024-10-01 20:40:34.351701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.351724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.200 [2024-10-01 20:40:34.355841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 [2024-10-01 20:40:34.356114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.356161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.200 [2024-10-01 20:40:34.360485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 [2024-10-01 20:40:34.360753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.360778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.200 [2024-10-01 20:40:34.365153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 [2024-10-01 20:40:34.365441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.365465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.200 [2024-10-01 20:40:34.369737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 [2024-10-01 20:40:34.369991] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.370015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.200 [2024-10-01 20:40:34.374325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 [2024-10-01 20:40:34.374578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.374602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.200 [2024-10-01 20:40:34.378831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 [2024-10-01 20:40:34.379097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.379129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.200 [2024-10-01 20:40:34.383369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 [2024-10-01 20:40:34.383624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.383649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.200 [2024-10-01 20:40:34.387850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 [2024-10-01 20:40:34.388118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.388166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.200 [2024-10-01 20:40:34.392397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 [2024-10-01 20:40:34.392646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.392671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.200 [2024-10-01 20:40:34.396812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 [2024-10-01 20:40:34.397061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.397085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.200 [2024-10-01 20:40:34.401380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 
[2024-10-01 20:40:34.401649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.401673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.200 [2024-10-01 20:40:34.405911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 [2024-10-01 20:40:34.406172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.406196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.200 [2024-10-01 20:40:34.410373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 [2024-10-01 20:40:34.410665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.410689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.200 [2024-10-01 20:40:34.414937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 [2024-10-01 20:40:34.415217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.415241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.200 [2024-10-01 20:40:34.419359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 [2024-10-01 20:40:34.419621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.419647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.200 [2024-10-01 20:40:34.423733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.200 [2024-10-01 20:40:34.424017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.200 [2024-10-01 20:40:34.424041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.201 [2024-10-01 20:40:34.428158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.201 [2024-10-01 20:40:34.428421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.201 [2024-10-01 20:40:34.428445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.201 [2024-10-01 20:40:34.432646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.201 [2024-10-01 20:40:34.432907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.201 [2024-10-01 20:40:34.432931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.201 [2024-10-01 20:40:34.437133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.201 [2024-10-01 20:40:34.437398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.201 [2024-10-01 20:40:34.437423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.201 [2024-10-01 20:40:34.441590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.201 [2024-10-01 20:40:34.441837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.201 [2024-10-01 20:40:34.441861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.201 6743.00 IOPS, 842.88 MiB/s [2024-10-01 20:40:34.447451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.201 [2024-10-01 20:40:34.447755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.201 [2024-10-01 20:40:34.447780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.461 [2024-10-01 20:40:34.452380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.461 [2024-10-01 20:40:34.452629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.461 [2024-10-01 20:40:34.452648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.461 [2024-10-01 20:40:34.457224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.461 [2024-10-01 20:40:34.457555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.461 [2024-10-01 20:40:34.457580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.461 [2024-10-01 20:40:34.461829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.461 [2024-10-01 20:40:34.462095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.461 [2024-10-01 20:40:34.462143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.461 [2024-10-01 
20:40:34.466404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.461 [2024-10-01 20:40:34.466675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.461 [2024-10-01 20:40:34.466699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.470809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.471069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.471093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.475308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.475569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.475593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.479722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.480010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.480034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.484258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.484519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.484543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.488696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.488957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.488981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.493190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.493451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.493474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.497569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.497827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.497851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.501965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.502241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.502265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.506455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.506731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.506755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.510839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.511108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.511142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.515353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.515612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.515637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.519748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.520054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.520079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.524447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.524713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.524739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.528926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.529201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.529225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.533307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.533556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.533580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.537679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.537939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.537964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.542074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.542355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.542379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.546480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.546746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.546770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.550874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.551134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.551184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.555318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.555624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.555650] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.559837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.560097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.560143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.564371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.564616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.564639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.462 [2024-10-01 20:40:34.568848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.462 [2024-10-01 20:40:34.569108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.462 [2024-10-01 20:40:34.569140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.573286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.573534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.463 [2024-10-01 20:40:34.573557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.577654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.577914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.463 [2024-10-01 20:40:34.577938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.582067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.582347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.463 [2024-10-01 20:40:34.582371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.586453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.586718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.463 [2024-10-01 20:40:34.586742] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.590870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.591130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.463 [2024-10-01 20:40:34.591164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.595298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.595559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.463 [2024-10-01 20:40:34.595582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.599681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.599963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.463 [2024-10-01 20:40:34.599988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.604223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.604485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.463 [2024-10-01 20:40:34.604507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.608590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.608838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.463 [2024-10-01 20:40:34.608862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.612980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.613252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.463 [2024-10-01 20:40:34.613274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.617489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.617738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:41:39.463 [2024-10-01 20:40:34.617763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.621990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.622270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.463 [2024-10-01 20:40:34.622290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.626431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.626730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.463 [2024-10-01 20:40:34.626771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.630854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.631102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.463 [2024-10-01 20:40:34.631136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.635259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.635521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.463 [2024-10-01 20:40:34.635545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.639613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.639888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.463 [2024-10-01 20:40:34.639912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.644056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.644364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.463 [2024-10-01 20:40:34.644388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.649172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.649434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.463 [2024-10-01 20:40:34.649460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.653915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.654243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.463 [2024-10-01 20:40:34.654267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.658510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.658792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.463 [2024-10-01 20:40:34.658817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.663125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.663392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.463 [2024-10-01 20:40:34.663416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.667667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.667963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.463 [2024-10-01 20:40:34.667989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.672328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.463 [2024-10-01 20:40:34.672582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.463 [2024-10-01 20:40:34.672605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.463 [2024-10-01 20:40:34.676862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.464 [2024-10-01 20:40:34.677123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.464 [2024-10-01 20:40:34.677171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.464 [2024-10-01 20:40:34.681270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.464 [2024-10-01 20:40:34.681517] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.464 [2024-10-01 20:40:34.681540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.464 [2024-10-01 20:40:34.685654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.464 [2024-10-01 20:40:34.685902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.464 [2024-10-01 20:40:34.685927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.464 [2024-10-01 20:40:34.690052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.464 [2024-10-01 20:40:34.690332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.464 [2024-10-01 20:40:34.690356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.464 [2024-10-01 20:40:34.694442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.464 [2024-10-01 20:40:34.694707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.464 [2024-10-01 20:40:34.694731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.464 [2024-10-01 20:40:34.698958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.464 [2024-10-01 20:40:34.699249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.464 [2024-10-01 20:40:34.699272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.464 [2024-10-01 20:40:34.703380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.464 [2024-10-01 20:40:34.703630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.464 [2024-10-01 20:40:34.703653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.464 [2024-10-01 20:40:34.707791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.464 [2024-10-01 20:40:34.708081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.464 [2024-10-01 20:40:34.708131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.725 [2024-10-01 20:40:34.713036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.725 [2024-10-01 20:40:34.713335] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.725 [2024-10-01 20:40:34.713359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.725 [2024-10-01 20:40:34.717671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.725 [2024-10-01 20:40:34.717931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.725 [2024-10-01 20:40:34.717954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.725 [2024-10-01 20:40:34.722488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.725 [2024-10-01 20:40:34.722778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.725 [2024-10-01 20:40:34.722803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.725 [2024-10-01 20:40:34.727130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.725 [2024-10-01 20:40:34.727399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.725 [2024-10-01 20:40:34.727422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.725 [2024-10-01 20:40:34.731622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.725 [2024-10-01 20:40:34.731921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.725 [2024-10-01 20:40:34.731946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.725 [2024-10-01 20:40:34.736256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.725 [2024-10-01 20:40:34.736511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.725 [2024-10-01 20:40:34.736535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.725 [2024-10-01 20:40:34.740645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.725 [2024-10-01 20:40:34.740907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.725 [2024-10-01 20:40:34.740931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.725 [2024-10-01 20:40:34.745052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 
00:41:39.725 [2024-10-01 20:40:34.745324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.725 [2024-10-01 20:40:34.745347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.725 [2024-10-01 20:40:34.749422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.725 [2024-10-01 20:40:34.749669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.725 [2024-10-01 20:40:34.749692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.725 [2024-10-01 20:40:34.753874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.725 [2024-10-01 20:40:34.754161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.725 [2024-10-01 20:40:34.754195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.725 [2024-10-01 20:40:34.758343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.725 [2024-10-01 20:40:34.758625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.725 [2024-10-01 20:40:34.758649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.725 [2024-10-01 20:40:34.762790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.725 [2024-10-01 20:40:34.763040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.725 [2024-10-01 20:40:34.763064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.725 [2024-10-01 20:40:34.767216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.725 [2024-10-01 20:40:34.767466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.725 [2024-10-01 20:40:34.767489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.725 [2024-10-01 20:40:34.771573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.725 [2024-10-01 20:40:34.771846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.725 [2024-10-01 20:40:34.771871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.725 [2024-10-01 20:40:34.776009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.725 [2024-10-01 20:40:34.776289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.725 [2024-10-01 20:40:34.776313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.725 [2024-10-01 20:40:34.780548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.725 [2024-10-01 20:40:34.780808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.725 [2024-10-01 20:40:34.780831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.725 [2024-10-01 20:40:34.784987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.725 [2024-10-01 20:40:34.785264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.725 [2024-10-01 20:40:34.785290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.725 [2024-10-01 20:40:34.789392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.725 [2024-10-01 20:40:34.789641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.725 [2024-10-01 20:40:34.789664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.726 [2024-10-01 20:40:34.793869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.794173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.794198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.726 [2024-10-01 20:40:34.799620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.799904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.799925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.726 [2024-10-01 20:40:34.804927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.805245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.805281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.726 [2024-10-01 20:40:34.809742] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.810059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.810086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.726 [2024-10-01 20:40:34.814271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.814603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.814654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.726 [2024-10-01 20:40:34.818784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.819034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.819073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.726 [2024-10-01 20:40:34.823262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.823513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.823537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.726 [2024-10-01 20:40:34.827657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.827947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.827972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.726 [2024-10-01 20:40:34.832088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.832382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.832406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.726 [2024-10-01 20:40:34.836565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.836827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.836851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
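Each cycle in the log above is the same three-record pattern: data_crc32_calc_done() in tcp.c flags a data digest mismatch on a received PDU, the affected 32-block WRITE is printed, and the command completes with TRANSIENT TRANSPORT ERROR (00/22) with m:0 dnr:0, i.e. retriable. Per the NVMe/TCP specification the data digest (DDGST) is a CRC32C over the PDU's DATA field. Below is a minimal standalone sketch of that receive-side check, not SPDK's implementation; the file name, 512-byte block size, and the single flipped bit are illustrative assumptions:

    /* crc32c_check.c - sketch of the DDGST verification that
     * data_crc32_calc_done() reports on. Table-driven CRC32C
     * (Castagnoli, reflected poly 0x82F63B78).
     * Build: cc -std=c99 crc32c_check.c */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t crc32c_table[256];

    static void crc32c_init(void)
    {
        for (uint32_t i = 0; i < 256; i++) {
            uint32_t crc = i;
            for (int j = 0; j < 8; j++)
                crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
            crc32c_table[i] = crc;
        }
    }

    static uint32_t crc32c(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;
        while (len--)
            crc = (crc >> 8) ^ crc32c_table[(crc ^ *p++) & 0xFFu];
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        crc32c_init();
        uint8_t data[32 * 512];                  /* 32-block WRITE; 512 B blocks assumed */
        memset(data, 0xA5, sizeof(data));
        uint32_t ddgst = crc32c(data, sizeof(data)); /* digest carried with the PDU */
        data[0] ^= 0x01;                         /* simulated corruption in flight */
        if (crc32c(data, sizeof(data)) != ddgst)
            fprintf(stderr, "Data digest error\n");  /* the tcp.c:2233 path above */
        return 0;
    }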
00:41:39.726 [2024-10-01 20:40:34.840972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.841230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.841254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.726 [2024-10-01 20:40:34.845408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.845670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.845694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.726 [2024-10-01 20:40:34.849827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.850074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.850098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.726 [2024-10-01 20:40:34.854321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.854590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.854614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.726 [2024-10-01 20:40:34.858772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.859032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.859056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.726 [2024-10-01 20:40:34.863112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.863371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.863394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.726 [2024-10-01 20:40:34.867561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.867850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.867875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.726 [2024-10-01 20:40:34.872299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.872554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.872578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.726 [2024-10-01 20:40:34.876682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.876930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.876954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.726 [2024-10-01 20:40:34.881176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.881443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.881467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.726 [2024-10-01 20:40:34.885596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.885845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.885869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.726 [2024-10-01 20:40:34.890015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.890293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.890317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.726 [2024-10-01 20:40:34.894468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.726 [2024-10-01 20:40:34.894732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.726 [2024-10-01 20:40:34.894756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.727 [2024-10-01 20:40:34.898862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.727 [2024-10-01 20:40:34.899124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.727 [2024-10-01 20:40:34.899158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.727 [2024-10-01 20:40:34.903412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.727 [2024-10-01 20:40:34.903675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.727 [2024-10-01 20:40:34.903725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.727 [2024-10-01 20:40:34.907850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.727 [2024-10-01 20:40:34.908114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.727 [2024-10-01 20:40:34.908134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.727 [2024-10-01 20:40:34.912373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.727 [2024-10-01 20:40:34.912682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.727 [2024-10-01 20:40:34.912707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.727 [2024-10-01 20:40:34.917001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.727 [2024-10-01 20:40:34.917271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.727 [2024-10-01 20:40:34.917295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.727 [2024-10-01 20:40:34.921402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.727 [2024-10-01 20:40:34.921668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.727 [2024-10-01 20:40:34.921718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.727 [2024-10-01 20:40:34.925826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.727 [2024-10-01 20:40:34.926073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.727 [2024-10-01 20:40:34.926092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.727 [2024-10-01 20:40:34.930199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.727 [2024-10-01 20:40:34.930505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.727 [2024-10-01 20:40:34.930544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.727 [2024-10-01 20:40:34.934759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.727 [2024-10-01 20:40:34.935020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.727 [2024-10-01 20:40:34.935044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.727 [2024-10-01 20:40:34.939193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.727 [2024-10-01 20:40:34.939485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.727 [2024-10-01 20:40:34.939525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.727 [2024-10-01 20:40:34.943627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.727 [2024-10-01 20:40:34.943918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.727 [2024-10-01 20:40:34.943941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.727 [2024-10-01 20:40:34.948105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.727 [2024-10-01 20:40:34.948384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.727 [2024-10-01 20:40:34.948407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.727 [2024-10-01 20:40:34.952525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.727 [2024-10-01 20:40:34.952789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.727 [2024-10-01 20:40:34.952839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.727 [2024-10-01 20:40:34.957046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.727 [2024-10-01 20:40:34.957303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.727 [2024-10-01 20:40:34.957327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.727 [2024-10-01 20:40:34.961463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.727 [2024-10-01 20:40:34.961710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.727 
[2024-10-01 20:40:34.961728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.727 [2024-10-01 20:40:34.965970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.727 [2024-10-01 20:40:34.966268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.727 [2024-10-01 20:40:34.966320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.727 [2024-10-01 20:40:34.970458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.727 [2024-10-01 20:40:34.970733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.727 [2024-10-01 20:40:34.970758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.727 [2024-10-01 20:40:34.975439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.727 [2024-10-01 20:40:34.975710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.727 [2024-10-01 20:40:34.975734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.988 [2024-10-01 20:40:34.980205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.988 [2024-10-01 20:40:34.980454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.988 [2024-10-01 20:40:34.980478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.988 [2024-10-01 20:40:34.984930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.988 [2024-10-01 20:40:34.985203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.988 [2024-10-01 20:40:34.985255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.988 [2024-10-01 20:40:34.989395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.988 [2024-10-01 20:40:34.989644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.988 [2024-10-01 20:40:34.989663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.988 [2024-10-01 20:40:34.993825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.988 [2024-10-01 20:40:34.994074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.988 [2024-10-01 20:40:34.994099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.988 [2024-10-01 20:40:34.998285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.988 [2024-10-01 20:40:34.998557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.988 [2024-10-01 20:40:34.998580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.002799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 [2024-10-01 20:40:35.003047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.003066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.007279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 [2024-10-01 20:40:35.007544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.007595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.011858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 [2024-10-01 20:40:35.012141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.012191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.016408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 [2024-10-01 20:40:35.016675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.016700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.020915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 [2024-10-01 20:40:35.021191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.021226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.025284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 [2024-10-01 20:40:35.025546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.025570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.029662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 [2024-10-01 20:40:35.029924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.029948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.034201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 [2024-10-01 20:40:35.034492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.034531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.038689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 [2024-10-01 20:40:35.038940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.038964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.043203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 [2024-10-01 20:40:35.043462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.043486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.047611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 [2024-10-01 20:40:35.047883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.047902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.052099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 [2024-10-01 20:40:35.052423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.052448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.056633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 [2024-10-01 20:40:35.056883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.056907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.060995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 [2024-10-01 20:40:35.061285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.061308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.065430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 [2024-10-01 20:40:35.065677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.065701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.069825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 [2024-10-01 20:40:35.070073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.070097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.074241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 [2024-10-01 20:40:35.074524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.074548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.078699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 [2024-10-01 20:40:35.078960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.078984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.083128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 [2024-10-01 20:40:35.083400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.083423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.087486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 
[2024-10-01 20:40:35.087747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.087770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.091923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 [2024-10-01 20:40:35.092211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.092235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.096351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 [2024-10-01 20:40:35.096601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.096625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.989 [2024-10-01 20:40:35.100766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.989 [2024-10-01 20:40:35.101026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.989 [2024-10-01 20:40:35.101050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.105321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.105571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.105594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.109684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.109946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.109970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.114082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.114369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.114394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.118630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.118891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.118915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.123299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.123571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.123596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.127972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.128288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.128314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.132960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.133272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.133297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.138035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.138392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.138421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.143537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.143886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.143913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.148545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.148794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.148814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.153407] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.153743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.153769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.158432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.158726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.158750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.163205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.163500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.163554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.168005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.168339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.168364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.172612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.172876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.172927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.177111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.177387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.177411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.181709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.181969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.181994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
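The completion status printed throughout, "(00/22) ... m:0 dnr:0", unpacks per the NVMe base specification's 15-bit CQE Status Field (DW3 bits 31:17; the p:0 phase tag is the adjacent bit 16): Status Code Type 0x0 (generic), Status Code 0x22 (Transient Transport Error), with More and Do-Not-Retry clear, so the host may retry the WRITE. A small decoding sketch follows; it is a standalone illustration, not spdk_nvme_print_completion():

    /* status_decode.c - unpack an NVMe CQE Status Field. */
    #include <stdint.h>
    #include <stdio.h>

    struct status { uint8_t sc, sct, crd; _Bool m, dnr; };

    static struct status decode_status(uint16_t sf)
    {
        return (struct status){
            .sc  =  sf        & 0xFFu,  /* Status Code         */
            .sct = (sf >> 8)  & 0x7u,   /* Status Code Type    */
            .crd = (sf >> 11) & 0x3u,   /* Command Retry Delay */
            .m   = (sf >> 13) & 0x1u,   /* More                */
            .dnr = (sf >> 14) & 0x1u,   /* Do Not Retry        */
        };
    }

    int main(void)
    {
        struct status s = decode_status(0x0022); /* the "(00/22)" in the log */
        printf("sct:%02x sc:%02x crd:%x m:%d dnr:%d\n",
               s.sct, s.sc, s.crd, s.m, s.dnr);
        return 0;
    }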
00:41:39.990 [2024-10-01 20:40:35.186286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.186549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.186573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.190638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.190900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.190925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.195044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.195322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.195345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.199520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.199776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.199800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.203933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.204211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.204235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.208358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.990 [2024-10-01 20:40:35.208626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.990 [2024-10-01 20:40:35.208651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.990 [2024-10-01 20:40:35.212774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.991 [2024-10-01 20:40:35.213023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.991 [2024-10-01 20:40:35.213041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.991 [2024-10-01 20:40:35.217379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.991 [2024-10-01 20:40:35.217671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.991 [2024-10-01 20:40:35.217712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:39.991 [2024-10-01 20:40:35.221898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.991 [2024-10-01 20:40:35.222174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.991 [2024-10-01 20:40:35.222193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:39.991 [2024-10-01 20:40:35.226337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.991 [2024-10-01 20:40:35.226602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.991 [2024-10-01 20:40:35.226652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:39.991 [2024-10-01 20:40:35.230708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.991 [2024-10-01 20:40:35.230969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.991 [2024-10-01 20:40:35.230993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:39.991 [2024-10-01 20:40:35.235273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:39.991 [2024-10-01 20:40:35.235563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:39.991 [2024-10-01 20:40:35.235588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:40.251 [2024-10-01 20:40:35.240088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:40.251 [2024-10-01 20:40:35.240410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.251 [2024-10-01 20:40:35.240435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:40.251 [2024-10-01 20:40:35.244718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:40.251 [2024-10-01 20:40:35.244978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.251 [2024-10-01 20:40:35.245002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:40.251 [2024-10-01 20:40:35.249500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:40.251 [2024-10-01 20:40:35.249748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.251 [2024-10-01 20:40:35.249773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:40.251 [2024-10-01 20:40:35.253958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:40.251 [2024-10-01 20:40:35.254268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.251 [2024-10-01 20:40:35.254292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:40.252 [2024-10-01 20:40:35.258809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:40.252 [2024-10-01 20:40:35.259077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.252 [2024-10-01 20:40:35.259103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:40.252 [2024-10-01 20:40:35.263564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:40.252 [2024-10-01 20:40:35.263859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.252 [2024-10-01 20:40:35.263884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:40.252 [2024-10-01 20:40:35.268311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:40.252 [2024-10-01 20:40:35.268582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.252 [2024-10-01 20:40:35.268606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:40.252 [2024-10-01 20:40:35.273340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:40.252 [2024-10-01 20:40:35.273638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.252 [2024-10-01 20:40:35.273664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:40.252 [2024-10-01 20:40:35.278366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:40.252 [2024-10-01 20:40:35.278690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.252 [2024-10-01 20:40:35.278715] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:40.252 [2024-10-01 20:40:35.283549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:40.252 [2024-10-01 20:40:35.283830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.252 [2024-10-01 20:40:35.283872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line record (data_crc32_calc_done *ERROR*, WRITE *NOTICE*, COMMAND TRANSIENT TRANSPORT ERROR (00/22) *NOTICE*) repeats for several dozen further 32-block WRITEs at other lbas on the same qpair between 20:40:35.288 and 20:40:35.441 ...]
00:41:40.253 [2024-10-01 20:40:35.441011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:40.253
[2024-10-01 20:40:35.441296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.253 [2024-10-01 20:40:35.441323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:40.253 6769.00 IOPS, 846.12 MiB/s [2024-10-01 20:40:35.446344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19b7b20) with pdu=0x20002887df90 00:41:40.253 [2024-10-01 20:40:35.446524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.253 [2024-10-01 20:40:35.446551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:40.253 00:41:40.253 Latency(us) 00:41:40.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:40.253 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:41:40.253 nvme0n1 : 2.00 6766.46 845.81 0.00 0.00 2359.16 1861.82 6106.76 00:41:40.253 =================================================================================================================== 00:41:40.253 Total : 6766.46 845.81 0.00 0.00 2359.16 1861.82 6106.76 00:41:40.253 { 00:41:40.253 "results": [ 00:41:40.253 { 00:41:40.253 "job": "nvme0n1", 00:41:40.253 "core_mask": "0x2", 00:41:40.253 "workload": "randwrite", 00:41:40.253 "status": "finished", 00:41:40.253 "queue_depth": 16, 00:41:40.253 "io_size": 131072, 00:41:40.253 "runtime": 2.003114, 00:41:40.253 "iops": 6766.464614595076, 00:41:40.253 "mibps": 845.8080768243844, 00:41:40.253 "io_failed": 0, 00:41:40.253 "io_timeout": 0, 00:41:40.253 "avg_latency_us": 2359.1609996378124, 00:41:40.253 "min_latency_us": 1861.8181818181818, 00:41:40.253 "max_latency_us": 6106.763636363637 00:41:40.253 } 00:41:40.253 ], 00:41:40.253 "core_count": 1 00:41:40.253 } 00:41:40.253 20:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:41:40.253 20:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:41:40.253 20:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:41:40.253 20:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:41:40.253 | .driver_specific 00:41:40.253 | .nvme_error 00:41:40.254 | .status_code 00:41:40.254 | .command_transient_transport_error' 00:41:40.820 20:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 437 > 0 )) 00:41:40.820 20:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94198 00:41:40.820 20:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94198 ']' 00:41:40.820 20:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94198 00:41:40.820 20:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:41:40.820 20:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:40.820 20:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94198 00:41:40.820 20:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:40.820 20:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:40.820 killing process with pid 94198 00:41:40.820 20:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94198' 00:41:40.820 20:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94198 00:41:40.820 Received shutdown signal, test time was about 2.000000 seconds 00:41:40.820 00:41:40.820 Latency(us) 00:41:40.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:40.820 =================================================================================================================== 00:41:40.820 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:40.820 20:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94198 00:41:40.820 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 93901 00:41:40.820 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 93901 ']' 00:41:40.820 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 93901 00:41:40.820 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:41:40.820 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:40.820 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93901 00:41:40.820 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:40.820 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:40.820 killing process with pid 93901 00:41:40.820 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93901' 00:41:40.820 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 93901 00:41:40.820 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 93901 00:41:41.079 00:41:41.079 real 0m17.209s 00:41:41.079 user 0m33.116s 00:41:41.079 sys 0m4.268s 00:41:41.079 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:41.079 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:41:41.079 ************************************ 00:41:41.079 END TEST nvmf_digest_error 00:41:41.079 ************************************ 00:41:41.079 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:41:41.079 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:41:41.079 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:41.080 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:41:41.338 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
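The shell trace above is the crux of the digest-error case: digest.sh pulls the per-bdev NVMe error counters over the bperf JSON-RPC socket, extracts the transient transport error count (437 here), and asserts it is non-zero before killing the bperf and target processes. A minimal standalone sketch of that same query, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock and exposes a bdev named nvme0n1:

    #!/usr/bin/env bash
    # Fetch per-bdev I/O statistics over the JSON-RPC socket; the NVMe bdev
    # module reports error completions under driver_specific.nvme_error.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    count=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # Every WRITE whose data digest was corrupted in flight should have
    # completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), so the
    # counter must be positive for the test to pass.
    (( count > 0 )) && echo "saw $count transient transport errors"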
00:41:41.338 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:41:41.338 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:41.338 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:41.338 rmmod nvme_tcp 00:41:41.338 rmmod nvme_fabrics 00:41:41.338 rmmod nvme_keyring 00:41:41.338 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:41.338 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:41:41.338 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:41:41.338 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 93901 ']' 00:41:41.338 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 93901 00:41:41.338 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 93901 ']' 00:41:41.338 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 93901 00:41:41.338 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (93901) - No such process 00:41:41.338 Process with pid 93901 is not found 00:41:41.338 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 93901 is not found' 00:41:41.338 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:41.338 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:41.338 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:41.338 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:41:41.338 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:41:41.339 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:41.339 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:41:41.339 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:41.339 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:41:41.339 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:41:41.339 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:41:41.339 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:41:41.339 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:41:41.339 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:41:41.339 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:41:41.339 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:41:41.339 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:41:41.339 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:41:41.339 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:41:41.339 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:41:41.598 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:41.598 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:41.598 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:41:41.598 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:41.598 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:41.598 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:41.598 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:41:41.598 00:41:41.598 real 0m35.808s 00:41:41.598 user 1m7.834s 00:41:41.598 sys 0m8.892s 00:41:41.598 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:41.598 20:40:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:41:41.598 ************************************ 00:41:41.598 END TEST nvmf_digest 00:41:41.598 ************************************ 00:41:41.598 20:40:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 00:41:41.598 20:40:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]] 00:41:41.598 20:40:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:41:41.598 20:40:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:41:41.598 20:40:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:41.598 20:40:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:41:41.598 ************************************ 00:41:41.598 START TEST nvmf_mdns_discovery 00:41:41.598 ************************************ 00:41:41.598 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:41:41.598 * Looking for test storage... 
00:41:41.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:41:41.598 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:41.598 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:41:41.598 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:41.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:41.859 --rc genhtml_branch_coverage=1 00:41:41.859 --rc genhtml_function_coverage=1 00:41:41.859 --rc genhtml_legend=1 00:41:41.859 --rc geninfo_all_blocks=1 00:41:41.859 --rc geninfo_unexecuted_blocks=1 00:41:41.859 00:41:41.859 ' 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:41.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:41.859 --rc genhtml_branch_coverage=1 00:41:41.859 --rc genhtml_function_coverage=1 00:41:41.859 --rc genhtml_legend=1 00:41:41.859 --rc geninfo_all_blocks=1 00:41:41.859 --rc geninfo_unexecuted_blocks=1 00:41:41.859 00:41:41.859 ' 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:41.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:41.859 --rc genhtml_branch_coverage=1 00:41:41.859 --rc genhtml_function_coverage=1 00:41:41.859 --rc genhtml_legend=1 00:41:41.859 --rc geninfo_all_blocks=1 00:41:41.859 --rc geninfo_unexecuted_blocks=1 00:41:41.859 00:41:41.859 ' 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:41.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:41.859 --rc genhtml_branch_coverage=1 00:41:41.859 --rc genhtml_function_coverage=1 00:41:41.859 --rc genhtml_legend=1 00:41:41.859 --rc geninfo_all_blocks=1 00:41:41.859 --rc geninfo_unexecuted_blocks=1 00:41:41.859 00:41:41.859 ' 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:41.859 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:41.860 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@458 -- # nvmf_veth_init 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:41:41.860 Cannot find device "nvmf_init_br" 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:41:41.860 Cannot find device "nvmf_init_br2" 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:41:41.860 Cannot find device "nvmf_tgt_br" 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # true 00:41:41.860 20:40:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:41:41.860 Cannot find device "nvmf_tgt_br2" 00:41:41.860 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # true 00:41:41.860 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:41:41.860 Cannot find device "nvmf_init_br" 00:41:41.860 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true 00:41:41.860 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:41:41.860 Cannot find device "nvmf_init_br2" 00:41:41.860 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true 00:41:41.860 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:41:41.860 Cannot find device "nvmf_tgt_br" 00:41:41.860 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # true 00:41:41.860 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:41:41.860 Cannot find device "nvmf_tgt_br2" 00:41:41.860 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # true 00:41:41.861 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:41:41.861 Cannot find device "nvmf_br" 00:41:41.861 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # true 00:41:41.861 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:41:41.861 Cannot find device "nvmf_init_if" 00:41:41.861 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # true 00:41:41.861 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:41:41.861 Cannot find device "nvmf_init_if2" 00:41:41.861 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # true 00:41:41.861 20:40:37 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:41.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:41.861 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # true 00:41:41.861 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:41.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:41.861 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # true 00:41:41.861 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:41:41.861 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:41.861 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
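For orientation, the nvmf_veth_init sequence traced above (together with the bridge-port and iptables lines that follow) builds a small veth topology: an nvmf_tgt_ns_spdk namespace holding the target interfaces, joined to the host-side initiator interfaces through an nvmf_br bridge. A condensed sketch of one initiator/target pair, with names and addresses taken from the trace (the real helper wires up a second pair of each):

    #!/usr/bin/env bash
    # One veth pair per endpoint: the *_if end carries the IP address, the
    # *_br end is enslaved to a common bridge so host and namespace can talk.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk      # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if            # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br             # bridge the host-side peer ends
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br up
    # After the iptables ACCEPT rules are added, 10.0.0.1 <-> 10.0.0.3
    # traffic flows across the bridge, which the pings below confirm.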
00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:41:42.121 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:42.121 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:41:42.121 00:41:42.121 --- 10.0.0.3 ping statistics --- 00:41:42.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:42.121 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:41:42.121 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:41:42.121 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:41:42.121 00:41:42.121 --- 10.0.0.4 ping statistics --- 00:41:42.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:42.121 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:42.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:42.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:41:42.121 00:41:42.121 --- 10.0.0.1 ping statistics --- 00:41:42.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:42.121 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:41:42.121 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:41:42.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:42.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:41:42.121 00:41:42.121 --- 10.0.0.2 ping statistics --- 00:41:42.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:42.121 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:41:42.381 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:42.381 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@459 -- # return 0 00:41:42.381 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:41:42.381 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:42.381 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:42.381 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:42.381 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:42.381 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:42.381 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:42.381 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:41:42.381 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:42.381 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:42.381 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:42.381 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@507 -- # nvmfpid=94530 00:41:42.381 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@508 -- # waitforlisten 94530 00:41:42.381 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:41:42.381 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # '[' -z 94530 ']' 00:41:42.381 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:42.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:42.381 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:42.381 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:42.381 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:42.381 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:42.381 [2024-10-01 20:40:37.462558] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:41:42.381 [2024-10-01 20:40:37.462647] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:42.381 [2024-10-01 20:40:37.602024] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:42.641 [2024-10-01 20:40:37.673490] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:42.641 [2024-10-01 20:40:37.673555] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:42.641 [2024-10-01 20:40:37.673569] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:42.641 [2024-10-01 20:40:37.673579] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:42.641 [2024-10-01 20:40:37.673588] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:42.641 [2024-10-01 20:40:37.673622] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:42.641 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:42.641 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # return 0 00:41:42.641 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:42.641 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:42.641 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:42.641 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:42.641 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:41:42.641 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.641 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:42.641 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.641 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:41:42.641 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.641 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:42.641 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.641 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:42.641 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.641 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:42.641 [2024-10-01 20:40:37.888614] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:42.641 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.642 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:42.901 [2024-10-01 20:40:37.900701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:42.901 null0 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:42.901 null1 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:42.901 null2 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:42.901 null3 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=94566 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 94566 /tmp/host.sock 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # '[' -z 94566 ']' 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # local 
rpc_addr=/tmp/host.sock 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:42.901 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:42.901 20:40:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:42.901 [2024-10-01 20:40:38.004552] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:41:42.901 [2024-10-01 20:40:38.004650] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94566 ] 00:41:42.902 [2024-10-01 20:40:38.142832] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:43.161 [2024-10-01 20:40:38.211835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:43.162 20:40:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:43.162 20:40:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # return 0 00:41:43.162 20:40:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:41:43.162 20:40:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:41:43.162 20:40:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:41:43.420 20:40:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=94583 00:41:43.420 20:40:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:41:43.420 20:40:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:41:43.421 20:40:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:41:43.421 Process 1072 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:41:43.421 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:41:43.421 Successfully dropped root privileges. 00:41:43.421 avahi-daemon 0.8 starting up. 00:41:43.421 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:41:43.421 Successfully called chroot(). 00:41:43.421 Successfully dropped remaining capabilities. 00:41:44.355 No service file found in /etc/avahi/services. 00:41:44.355 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:41:44.355 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:41:44.355 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:41:44.355 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:41:44.355 Network interface enumeration completed. 00:41:44.355 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 
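The avahi-daemon whose startup log begins above (and continues below) runs in the target namespace with an inline config; the /dev/fd/63 in the trace is bash process substitution. A sketch of the equivalent standalone invocation, with the config reproduced verbatim from the echo in the trace:

  ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(printf '%s\n' \
      '[server]' \
      'allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2' \
      'use-ipv4=yes' \
      'use-ipv6=no')

Restricting allow-interfaces to the two test interfaces is what keeps the address registrations below limited to 10.0.0.3 and 10.0.0.4.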
00:41:44.355 Registering new address record for 10.0.0.4 on nvmf_tgt_if2.IPv4. 00:41:44.355 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:41:44.355 Registering new address record for 10.0.0.3 on nvmf_tgt_if.IPv4. 00:41:44.356 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 2504839852. 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]] 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]] 00:41:44.356 20:40:39 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:41:44.356 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:41:44.614 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:44.614 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]] 00:41:44.614 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list 00:41:44.614 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]] 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 
-- # set +x 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]] 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:44.615 [2024-10-01 20:40:39.779698] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]] 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:44.615 [2024-10-01 20:40:39.825156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:44.615 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:44.874 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:44.874 20:40:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5 00:41:45.442 [2024-10-01 20:40:40.679701] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:41:46.011 [2024-10-01 20:40:41.079709] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:41:46.011 [2024-10-01 20:40:41.079731] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:41:46.011 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:41:46.011 cookie is 0 00:41:46.011 is_local: 1 00:41:46.011 our_own: 0 00:41:46.011 wide_area: 0 00:41:46.011 multicast: 1 00:41:46.011 cached: 1 00:41:46.011 [2024-10-01 20:40:41.179700] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:41:46.011 [2024-10-01 20:40:41.179720] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:41:46.011 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:41:46.011 cookie is 0 00:41:46.011 is_local: 1 00:41:46.011 our_own: 0 00:41:46.011 wide_area: 0 00:41:46.011 multicast: 1 00:41:46.011 cached: 1 00:41:46.949 [2024-10-01 20:40:42.080454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:46.949 [2024-10-01 20:40:42.080511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74ab0 with addr=10.0.0.4, port=8009 00:41:46.949 [2024-10-01 20:40:42.080538] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:41:46.949 [2024-10-01 20:40:42.080550] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:41:46.949 [2024-10-01 20:40:42.080570] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:41:46.949 [2024-10-01 20:40:42.184170] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:41:46.949 [2024-10-01 20:40:42.184211] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:41:46.949 [2024-10-01 20:40:42.184244] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:41:47.208 [2024-10-01 20:40:42.270314] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem mdns1_nvme0 00:41:47.208 [2024-10-01 20:40:42.326510] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:41:47.208 [2024-10-01 20:40:42.326536] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:41:48.144 [2024-10-01 20:40:43.080365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:48.144 [2024-10-01 20:40:43.080418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde1e30 with addr=10.0.0.4, port=8009 00:41:48.144 [2024-10-01 20:40:43.080434] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:41:48.144 [2024-10-01 20:40:43.080442] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:41:48.144 [2024-10-01 20:40:43.080450] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:41:49.080 [2024-10-01 20:40:44.080352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:49.080 [2024-10-01 20:40:44.080403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca8f50 with addr=10.0.0.4, port=8009 00:41:49.080 [2024-10-01 20:40:44.080417] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:41:49.080 [2024-10-01 20:40:44.080425] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:41:49.080 [2024-10-01 20:40:44.080432] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:41:49.649 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found' 00:41:49.649 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:41:49.649 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:41:49.649 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:41:49.649 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:41:49.649 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:41:49.649 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:41:49.908 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:41:49.908 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:41:49.908 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:41:49.908 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:41:49.908 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:41:49.908 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:41:49.908 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:41:49.908 20:40:44 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:41:49.908 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:41:49.908 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:41:49.908 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:41:49.908 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:41:49.908 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:41:49.908 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:41:49.908 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:41:49.908 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009 00:41:49.908 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.908 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:49.908 [2024-10-01 20:40:44.911244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 *** 00:41:49.908 [2024-10-01 20:40:44.913299] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:41:49.908 [2024-10-01 20:40:44.913333] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:41:49.908 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.908 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:41:49.908 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:49.908 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:49.908 [2024-10-01 20:40:44.919181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:41:49.908 [2024-10-01 20:40:44.920294] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:41:49.908 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:49.908 20:40:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1 00:41:49.908 [2024-10-01 20:40:45.051412] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:41:49.908 [2024-10-01 20:40:45.051456] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:41:49.908 [2024-10-01 20:40:45.085104] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:41:49.908 [2024-10-01 
20:40:45.085148] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:41:49.909 [2024-10-01 20:40:45.085164] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:41:49.909 [2024-10-01 20:40:45.137262] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:41:50.176 [2024-10-01 20:40:45.172296] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns0_nvme0 00:41:50.176 [2024-10-01 20:40:45.228185] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:41:50.176 [2024-10-01 20:40:45.228211] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:41:50.745 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:41:50.745 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:41:50.745 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:41:50.745 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:41:50.745 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:41:50.745 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:41:50.745 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ 
+;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]] 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:41:50.745 20:40:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]] 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 
00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:51.005 [2024-10-01 20:40:46.079706] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:41:51.005 [2024-10-01 20:40:46.079731] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:41:51.005 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:41:51.005 cookie is 0 00:41:51.005 is_local: 1 00:41:51.005 our_own: 0 00:41:51.005 wide_area: 0 00:41:51.005 multicast: 1 00:41:51.005 cached: 1 00:41:51.005 [2024-10-01 20:40:46.079758] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:41:51.005 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:41:51.006 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:41:51.006 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:41:51.006 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:41:51.006 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:41:51.006 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:51.006 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:51.006 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:51.006 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]] 00:41:51.006 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:41:51.006 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:41:51.006 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:51.006 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:41:51.006 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:51.006 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:41:51.006 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:41:51.006 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:51.265 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]] 
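The trsvcid checks above (and those that follow) are the suite's get_subsystem_paths helper: query the host app for a controller's paths and flatten the listening service IDs. A sketch of the same query with the stock rpc.py client, with the socket path and jq filter verbatim from the trace:

  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  # prints "4420" at this point; "4420 4421" once the 4421 listeners are added below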
00:41:51.265 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:41:51.265 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:41:51.265 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:51.265 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:51.265 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:41:51.265 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:51.265 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:41:51.265 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2 00:41:51.265 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]] 00:41:51.265 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:41:51.265 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:51.265 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:51.265 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:51.265 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:41:51.265 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:51.265 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:51.265 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:51.265 20:40:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1 00:41:51.265 [2024-10-01 20:40:46.379724] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:41:51.265 [2024-10-01 20:40:46.379748] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:41:51.265 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:41:51.265 cookie is 0 00:41:51.265 is_local: 1 00:41:51.265 our_own: 0 00:41:51.265 wide_area: 0 00:41:51.265 multicast: 1 00:41:51.265 cached: 1 00:41:51.265 [2024-10-01 20:40:46.379774] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:41:52.202 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:41:52.202 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:52.202 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:41:52.202 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:52.202 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:41:52.202 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:41:52.202 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:52.202 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:52.202 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:41:52.202 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:41:52.202 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:41:52.202 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:41:52.202 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:52.202 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:52.202 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:52.461 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:41:52.461 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:41:52.462 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]] 00:41:52.462 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:41:52.462 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:52.462 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:52.462 [2024-10-01 20:40:47.460466] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:41:52.462 [2024-10-01 20:40:47.461036] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:41:52.462 [2024-10-01 20:40:47.461062] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:41:52.462 [2024-10-01 20:40:47.461094] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:41:52.462 [2024-10-01 20:40:47.461107] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:41:52.462 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:52.462 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421 00:41:52.462 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:52.462 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:52.462 [2024-10-01 20:40:47.468406] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 *** 00:41:52.462 [2024-10-01 20:40:47.469049] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:41:52.462 [2024-10-01 20:40:47.469187] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:41:52.462 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:52.462 20:40:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1 00:41:52.462 [2024-10-01 20:40:47.601261] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for mdns1_nvme0 00:41:52.462 [2024-10-01 20:40:47.601605] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns0_nvme0 00:41:52.462 [2024-10-01 20:40:47.659650] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:41:52.462 [2024-10-01 20:40:47.659674] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:41:52.462 [2024-10-01 20:40:47.659696] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:41:52.462 [2024-10-01 20:40:47.659713] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:41:52.462 [2024-10-01 20:40:47.659950] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:41:52.462 [2024-10-01 20:40:47.659976] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:41:52.462 [2024-10-01 20:40:47.659981] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:41:52.462 [2024-10-01 20:40:47.659995] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:41:52.462 [2024-10-01 20:40:47.705344] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:41:52.462 [2024-10-01 20:40:47.705363] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:41:52.462 [2024-10-01 20:40:47.705414] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:41:52.462 [2024-10-01 20:40:47.705422] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:41:53.398 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:53.659 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:41:53.659 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0 00:41:53.659 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:41:53.659 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:41:53.659 20:40:48 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:53.659 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:53.659 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:41:53.659 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:41:53.659 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:53.659 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:41:53.659 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count 00:41:53.659 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:41:53.659 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:53.659 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:53.659 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:41:53.659 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:53.659 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:41:53.659 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:41:53.659 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]] 00:41:53.659 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:41:53.659 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:53.659 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:53.659 [2024-10-01 20:40:48.781924] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:41:53.659 [2024-10-01 20:40:48.781974] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:41:53.659 [2024-10-01 20:40:48.782008] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:41:53.659 [2024-10-01 20:40:48.782021] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:41:53.659 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:53.659 [2024-10-01 20:40:48.785489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:41:53.659 [2024-10-01 20:40:48.785544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:53.659 [2024-10-01 20:40:48.785555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:41:53.659 [2024-10-01 20:40:48.785564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:53.659 [2024-10-01 20:40:48.785573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:41:53.659 [2024-10-01 20:40:48.785581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:53.659 [2024-10-01 20:40:48.785589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:41:53.660 [2024-10-01 20:40:48.785597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:53.660 [2024-10-01 20:40:48.785606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc85970 is same with the state(6) to be set 00:41:53.660 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:41:53.660 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:53.660 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:53.660 [2024-10-01 20:40:48.789929] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:41:53.660 [2024-10-01 20:40:48.790002] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:41:53.660 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:53.660 20:40:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1 00:41:53.660 [2024-10-01 20:40:48.795433] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc85970 (9): Bad file descriptor 00:41:53.660 [2024-10-01 20:40:48.798549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:41:53.660 [2024-10-01 20:40:48.798574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:53.660 [2024-10-01 20:40:48.798584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:41:53.660 [2024-10-01 20:40:48.798593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:53.660 [2024-10-01 20:40:48.798601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:41:53.660 [2024-10-01 20:40:48.798609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:53.660 [2024-10-01 20:40:48.798618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:41:53.660 [2024-10-01 20:40:48.798625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:53.660 [2024-10-01 20:40:48.798633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc93770 is same with the state(6) to be set 00:41:53.660 [2024-10-01 20:40:48.805452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:41:53.660 [2024-10-01 20:40:48.805611] posix.c:1055:posix_sock_create: *ERROR*: 
connect() failed, errno = 111 00:41:53.660 [2024-10-01 20:40:48.805631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc85970 with addr=10.0.0.3, port=4420 00:41:53.660 [2024-10-01 20:40:48.805640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc85970 is same with the state(6) to be set 00:41:53.660 [2024-10-01 20:40:48.805655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc85970 (9): Bad file descriptor 00:41:53.660 [2024-10-01 20:40:48.805668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:53.660 [2024-10-01 20:40:48.805675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:41:53.660 [2024-10-01 20:40:48.805683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:41:53.660 [2024-10-01 20:40:48.805701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:53.660 [2024-10-01 20:40:48.808516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc93770 (9): Bad file descriptor 00:41:53.660 [2024-10-01 20:40:48.815543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:41:53.660 [2024-10-01 20:40:48.815628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:53.660 [2024-10-01 20:40:48.815645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc85970 with addr=10.0.0.3, port=4420 00:41:53.660 [2024-10-01 20:40:48.815655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc85970 is same with the state(6) to be set 00:41:53.660 [2024-10-01 20:40:48.815669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc85970 (9): Bad file descriptor 00:41:53.660 [2024-10-01 20:40:48.815681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:53.660 [2024-10-01 20:40:48.815688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:41:53.660 [2024-10-01 20:40:48.815695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:41:53.660 [2024-10-01 20:40:48.815707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
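
The ECONNREFUSED loop above is the direct consequence of the @195/@196 steps: with the 4420 listeners removed, nothing accepts connections on that port any more, so the host-side reconnect poller fails over and over until the next discovery log page advertises 4421 instead. A minimal sketch of the target-side listener removal, assuming SPDK's stock scripts/rpc.py client and the default target RPC socket (the test itself drives this through its rpc_cmd wrapper):

    # Drop the 4420 listeners; this is exactly what provokes the retry storm
    # in the surrounding records. 4421 stays advertised.
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 \
        -t tcp -a 10.0.0.4 -s 4420
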
00:41:53.660 [2024-10-01 20:40:48.818525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:41:53.660 [2024-10-01 20:40:48.818609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:53.660 [2024-10-01 20:40:48.818626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc93770 with addr=10.0.0.4, port=4420 00:41:53.660 [2024-10-01 20:40:48.818636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc93770 is same with the state(6) to be set 00:41:53.660 [2024-10-01 20:40:48.818649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc93770 (9): Bad file descriptor 00:41:53.660 [2024-10-01 20:40:48.818660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:41:53.660 [2024-10-01 20:40:48.818668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:41:53.660 [2024-10-01 20:40:48.818676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:41:53.660 [2024-10-01 20:40:48.818688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:53.660 [2024-10-01 20:40:48.825600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:41:53.660 [2024-10-01 20:40:48.825682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:53.660 [2024-10-01 20:40:48.825699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc85970 with addr=10.0.0.3, port=4420 00:41:53.660 [2024-10-01 20:40:48.825708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc85970 is same with the state(6) to be set 00:41:53.660 [2024-10-01 20:40:48.825720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc85970 (9): Bad file descriptor 00:41:53.660 [2024-10-01 20:40:48.825732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:53.660 [2024-10-01 20:40:48.825739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:41:53.660 [2024-10-01 20:40:48.825747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:41:53.660 [2024-10-01 20:40:48.825758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:53.660 [2024-10-01 20:40:48.828582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:41:53.660 [2024-10-01 20:40:48.828667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:53.660 [2024-10-01 20:40:48.828684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc93770 with addr=10.0.0.4, port=4420 00:41:53.660 [2024-10-01 20:40:48.828693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc93770 is same with the state(6) to be set 00:41:53.660 [2024-10-01 20:40:48.828706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc93770 (9): Bad file descriptor 00:41:53.660 [2024-10-01 20:40:48.828719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:41:53.660 [2024-10-01 20:40:48.828726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:41:53.660 [2024-10-01 20:40:48.828734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:41:53.660 [2024-10-01 20:40:48.828747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:53.660 [2024-10-01 20:40:48.835657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:41:53.660 [2024-10-01 20:40:48.835745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:53.660 [2024-10-01 20:40:48.835763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc85970 with addr=10.0.0.3, port=4420 00:41:53.660 [2024-10-01 20:40:48.835772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc85970 is same with the state(6) to be set 00:41:53.660 [2024-10-01 20:40:48.835785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc85970 (9): Bad file descriptor 00:41:53.660 [2024-10-01 20:40:48.835797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:53.660 [2024-10-01 20:40:48.835804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:41:53.660 [2024-10-01 20:40:48.835820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:41:53.660 [2024-10-01 20:40:48.835865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
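
For reference, errno 111 in the posix_sock_create records is Linux's ECONNREFUSED: the connect() to port 4420 is actively refused because no listener remains there. A quick way to confirm the mapping on the build host (a one-liner, not part of the test):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused
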
00:41:53.660 [2024-10-01 20:40:48.838641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:41:53.660 [2024-10-01 20:40:48.838726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:53.660 [2024-10-01 20:40:48.838743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc93770 with addr=10.0.0.4, port=4420 00:41:53.660 [2024-10-01 20:40:48.838752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc93770 is same with the state(6) to be set 00:41:53.661 [2024-10-01 20:40:48.838765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc93770 (9): Bad file descriptor 00:41:53.661 [2024-10-01 20:40:48.838777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:41:53.661 [2024-10-01 20:40:48.838784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:41:53.661 [2024-10-01 20:40:48.838792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:41:53.661 [2024-10-01 20:40:48.838804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:53.661 [2024-10-01 20:40:48.845716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:41:53.661 [2024-10-01 20:40:48.845799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:53.661 [2024-10-01 20:40:48.845815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc85970 with addr=10.0.0.3, port=4420 00:41:53.661 [2024-10-01 20:40:48.845824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc85970 is same with the state(6) to be set 00:41:53.661 [2024-10-01 20:40:48.845837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc85970 (9): Bad file descriptor 00:41:53.661 [2024-10-01 20:40:48.845864] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:53.661 [2024-10-01 20:40:48.845871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:41:53.661 [2024-10-01 20:40:48.845879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:41:53.661 [2024-10-01 20:40:48.845891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:53.661 [2024-10-01 20:40:48.848699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:41:53.661 [2024-10-01 20:40:48.848780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:53.661 [2024-10-01 20:40:48.848796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc93770 with addr=10.0.0.4, port=4420 00:41:53.661 [2024-10-01 20:40:48.848805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc93770 is same with the state(6) to be set 00:41:53.661 [2024-10-01 20:40:48.848818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc93770 (9): Bad file descriptor 00:41:53.661 [2024-10-01 20:40:48.848830] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:41:53.661 [2024-10-01 20:40:48.848837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:41:53.661 [2024-10-01 20:40:48.848845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:41:53.661 [2024-10-01 20:40:48.848856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:53.661 [2024-10-01 20:40:48.855774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:41:53.661 [2024-10-01 20:40:48.855893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:53.661 [2024-10-01 20:40:48.855912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc85970 with addr=10.0.0.3, port=4420 00:41:53.661 [2024-10-01 20:40:48.855921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc85970 is same with the state(6) to be set 00:41:53.661 [2024-10-01 20:40:48.855935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc85970 (9): Bad file descriptor 00:41:53.661 [2024-10-01 20:40:48.855948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:53.661 [2024-10-01 20:40:48.855955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:41:53.661 [2024-10-01 20:40:48.855963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:41:53.661 [2024-10-01 20:40:48.855994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
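
Each repeated block here is one complete reset cycle: a disconnect notice from nvme_ctrlr_disconnect, a refused connect(), the qpair flush failing with Bad file descriptor, and _bdev_nvme_reset_ctrlr_complete recording the attempt as failed before the next one is scheduled roughly 10 ms later. The script rides this out with the sleep at @197; a stricter variant could poll for the path flip instead. A sketch under that assumption (wait_for_path_flip is a hypothetical helper, reusing the host socket and jq filter of get_subsystem_paths at @73):

    wait_for_path_flip() { # hypothetical helper, not in mdns_discovery.sh
        local ctrlr=$1 deadline=$((SECONDS + 10))
        while ((SECONDS < deadline)); do
            [[ $(scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$ctrlr" |
                 jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs) == 4421 ]] && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_path_flip mdns0_nvme0 && wait_for_path_flip mdns1_nvme0
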
00:41:53.661 [2024-10-01 20:40:48.858756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:41:53.661 [2024-10-01 20:40:48.858837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:53.661 [2024-10-01 20:40:48.858854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc93770 with addr=10.0.0.4, port=4420 00:41:53.661 [2024-10-01 20:40:48.858864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc93770 is same with the state(6) to be set 00:41:53.661 [2024-10-01 20:40:48.858877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc93770 (9): Bad file descriptor 00:41:53.661 [2024-10-01 20:40:48.858890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:41:53.661 [2024-10-01 20:40:48.858898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:41:53.661 [2024-10-01 20:40:48.858906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:41:53.661 [2024-10-01 20:40:48.858918] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:53.661 [2024-10-01 20:40:48.865853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:41:53.661 [2024-10-01 20:40:48.865934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:53.661 [2024-10-01 20:40:48.865951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc85970 with addr=10.0.0.3, port=4420 00:41:53.661 [2024-10-01 20:40:48.865959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc85970 is same with the state(6) to be set 00:41:53.661 [2024-10-01 20:40:48.865972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc85970 (9): Bad file descriptor 00:41:53.661 [2024-10-01 20:40:48.865997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:53.661 [2024-10-01 20:40:48.866005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:41:53.661 [2024-10-01 20:40:48.866013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:41:53.661 [2024-10-01 20:40:48.866025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:53.661 [2024-10-01 20:40:48.868812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:41:53.661 [2024-10-01 20:40:48.868896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:53.661 [2024-10-01 20:40:48.868912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc93770 with addr=10.0.0.4, port=4420 00:41:53.661 [2024-10-01 20:40:48.868921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc93770 is same with the state(6) to be set 00:41:53.661 [2024-10-01 20:40:48.868934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc93770 (9): Bad file descriptor 00:41:53.661 [2024-10-01 20:40:48.868946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:41:53.661 [2024-10-01 20:40:48.868953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:41:53.661 [2024-10-01 20:40:48.868960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:41:53.661 [2024-10-01 20:40:48.868972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:53.661 [2024-10-01 20:40:48.875911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:41:53.661 [2024-10-01 20:40:48.875996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:53.661 [2024-10-01 20:40:48.876014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc85970 with addr=10.0.0.3, port=4420 00:41:53.661 [2024-10-01 20:40:48.876023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc85970 is same with the state(6) to be set 00:41:53.661 [2024-10-01 20:40:48.876037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc85970 (9): Bad file descriptor 00:41:53.661 [2024-10-01 20:40:48.876066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:53.661 [2024-10-01 20:40:48.876075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:41:53.661 [2024-10-01 20:40:48.876083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:41:53.661 [2024-10-01 20:40:48.876095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
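
The notification bookkeeping that brackets this storm (notification_count=0 / notify_id=4 at @116/@117 above, notification_count=4 / notify_id=8 at @212 below) follows one small pattern: fetch every notification newer than the last seen notify_id, count the batch, and advance the cursor by that count. Approximately, against the same host socket:

    # get_notification_count, roughly (see @116/@117)
    notification_count=$(scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" |
                         jq '. | length')
    notify_id=$((notify_id + notification_count))
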
00:41:53.661 [2024-10-01 20:40:48.878868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:41:53.661 [2024-10-01 20:40:48.878956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:53.661 [2024-10-01 20:40:48.878974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc93770 with addr=10.0.0.4, port=4420 00:41:53.661 [2024-10-01 20:40:48.878983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc93770 is same with the state(6) to be set 00:41:53.661 [2024-10-01 20:40:48.878996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc93770 (9): Bad file descriptor 00:41:53.661 [2024-10-01 20:40:48.879008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:41:53.661 [2024-10-01 20:40:48.879015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:41:53.661 [2024-10-01 20:40:48.879023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:41:53.661 [2024-10-01 20:40:48.879035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:53.661 [2024-10-01 20:40:48.885969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:41:53.661 [2024-10-01 20:40:48.886056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:53.662 [2024-10-01 20:40:48.886073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc85970 with addr=10.0.0.3, port=4420 00:41:53.662 [2024-10-01 20:40:48.886082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc85970 is same with the state(6) to be set 00:41:53.662 [2024-10-01 20:40:48.886095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc85970 (9): Bad file descriptor 00:41:53.662 [2024-10-01 20:40:48.886120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:53.662 [2024-10-01 20:40:48.886157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:41:53.662 [2024-10-01 20:40:48.886166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:41:53.662 [2024-10-01 20:40:48.886179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:53.662 [2024-10-01 20:40:48.888929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:41:53.662 [2024-10-01 20:40:48.889008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:53.662 [2024-10-01 20:40:48.889025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc93770 with addr=10.0.0.4, port=4420 00:41:53.662 [2024-10-01 20:40:48.889034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc93770 is same with the state(6) to be set 00:41:53.662 [2024-10-01 20:40:48.889046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc93770 (9): Bad file descriptor 00:41:53.662 [2024-10-01 20:40:48.889058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:41:53.662 [2024-10-01 20:40:48.889066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:41:53.662 [2024-10-01 20:40:48.889073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:41:53.662 [2024-10-01 20:40:48.889085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:53.662 [2024-10-01 20:40:48.896029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:41:53.662 [2024-10-01 20:40:48.896116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:53.662 [2024-10-01 20:40:48.896158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc85970 with addr=10.0.0.3, port=4420 00:41:53.662 [2024-10-01 20:40:48.896168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc85970 is same with the state(6) to be set 00:41:53.662 [2024-10-01 20:40:48.896197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc85970 (9): Bad file descriptor 00:41:53.662 [2024-10-01 20:40:48.896224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:53.662 [2024-10-01 20:40:48.896233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:41:53.662 [2024-10-01 20:40:48.896241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:41:53.662 [2024-10-01 20:40:48.896253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
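
While the stale 4420 paths churn through these failed resets, the namespaces themselves never disappear; the @200/@222 checks later in the log assert exactly that. The get_bdev_list helper (@65) behind them reduces to:

    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    # expected throughout this phase:
    # mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2
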
00:41:53.662 [2024-10-01 20:40:48.898984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:41:53.662 [2024-10-01 20:40:48.899069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:53.662 [2024-10-01 20:40:48.899086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc93770 with addr=10.0.0.4, port=4420 00:41:53.662 [2024-10-01 20:40:48.899095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc93770 is same with the state(6) to be set 00:41:53.662 [2024-10-01 20:40:48.899109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc93770 (9): Bad file descriptor 00:41:53.662 [2024-10-01 20:40:48.899120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:41:53.662 [2024-10-01 20:40:48.899154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:41:53.662 [2024-10-01 20:40:48.899164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:41:53.662 [2024-10-01 20:40:48.899177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:53.662 [2024-10-01 20:40:48.906090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:41:53.662 [2024-10-01 20:40:48.906228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:53.662 [2024-10-01 20:40:48.906247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc85970 with addr=10.0.0.3, port=4420 00:41:53.662 [2024-10-01 20:40:48.906256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc85970 is same with the state(6) to be set 00:41:53.662 [2024-10-01 20:40:48.906270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc85970 (9): Bad file descriptor 00:41:53.662 [2024-10-01 20:40:48.906297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:53.662 [2024-10-01 20:40:48.906306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:41:53.662 [2024-10-01 20:40:48.906314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:41:53.662 [2024-10-01 20:40:48.906327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:53.662 [2024-10-01 20:40:48.909040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:41:53.662 [2024-10-01 20:40:48.909134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:53.662 [2024-10-01 20:40:48.909161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc93770 with addr=10.0.0.4, port=4420 00:41:53.662 [2024-10-01 20:40:48.909171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc93770 is same with the state(6) to be set 00:41:53.662 [2024-10-01 20:40:48.909184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc93770 (9): Bad file descriptor 00:41:53.662 [2024-10-01 20:40:48.909196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:41:53.662 [2024-10-01 20:40:48.909204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:41:53.662 [2024-10-01 20:40:48.909211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:41:53.662 [2024-10-01 20:40:48.909224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:53.924 [2024-10-01 20:40:48.916174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:41:53.924 [2024-10-01 20:40:48.916268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:53.924 [2024-10-01 20:40:48.916285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc85970 with addr=10.0.0.3, port=4420 00:41:53.924 [2024-10-01 20:40:48.916294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc85970 is same with the state(6) to be set 00:41:53.924 [2024-10-01 20:40:48.916307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc85970 (9): Bad file descriptor 00:41:53.924 [2024-10-01 20:40:48.916333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:53.924 [2024-10-01 20:40:48.916343] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:41:53.924 [2024-10-01 20:40:48.916350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:41:53.924 [2024-10-01 20:40:48.916362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
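
Once the paths have settled on 4421, the test tears the mDNS session down and later brings it back up (@206 and @216 below); stopping the bdev is what releases the avahi poller, visible further on as "Stopping avahi poller for service _nvme-disc._tcp". In plain rpc.py terms, assuming the same host socket:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns
    # ...@209-@212 verify that discovery info, controllers and bdevs are all gone...
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
        -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
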
00:41:53.924 [2024-10-01 20:40:48.919094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:41:53.924 [2024-10-01 20:40:48.919180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:53.924 [2024-10-01 20:40:48.919198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc93770 with addr=10.0.0.4, port=4420 00:41:53.924 [2024-10-01 20:40:48.919207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc93770 is same with the state(6) to be set 00:41:53.924 [2024-10-01 20:40:48.919220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc93770 (9): Bad file descriptor 00:41:53.924 [2024-10-01 20:40:48.919232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:41:53.924 [2024-10-01 20:40:48.919239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:41:53.924 [2024-10-01 20:40:48.919246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:41:53.925 [2024-10-01 20:40:48.919258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:53.925 [2024-10-01 20:40:48.921342] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:41:53.925 [2024-10-01 20:40:48.921369] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:41:53.925 [2024-10-01 20:40:48.921387] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:41:53.925 [2024-10-01 20:40:48.921418] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found 00:41:53.925 [2024-10-01 20:40:48.921432] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:41:53.925 [2024-10-01 20:40:48.921445] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:41:53.925 [2024-10-01 20:40:49.007407] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:41:53.925 [2024-10-01 20:40:49.007474] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:41:54.862 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names 00:41:54.862 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:41:54.862 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:41:54.862 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.862 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:41:54.862 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:41:54.863 20:40:49 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]] 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:41:54.863 20:40:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.863 20:40:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 
4421 == \4\4\2\1 ]] 00:41:54.863 20:40:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count 00:41:54.863 20:40:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:41:54.863 20:40:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:41:54.863 20:40:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.863 20:40:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:54.863 20:40:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.863 20:40:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:41:54.863 20:40:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:41:54.863 20:40:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]] 00:41:54.863 20:40:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:41:54.863 20:40:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:54.863 20:40:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:54.863 20:40:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:54.863 20:40:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1 00:41:55.122 [2024-10-01 20:40:50.179729] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]] 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:56.059 
20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]] 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # get_bdev_list 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]] 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]] 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:41:56.059 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:56.318 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:56.318 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:56.318 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:41:56.318 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # local es=0 00:41:56.319 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:41:56.319 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:41:56.319 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:56.319 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:41:56.319 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:56.319 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:41:56.319 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:56.319 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:41:56.319 [2024-10-01 20:40:51.325964] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:41:56.319 2024/10/01 20:40:51 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:41:56.319 request: 00:41:56.319 { 00:41:56.319 "method": "bdev_nvme_start_mdns_discovery", 00:41:56.319 "params": { 00:41:56.319 "name": "mdns", 00:41:56.319 "svcname": "_nvme-disc._http", 00:41:56.319 "hostnqn": "nqn.2021-12.io.spdk:test" 00:41:56.319 } 00:41:56.319 } 00:41:56.319 Got JSON-RPC error response 00:41:56.319 GoRPCClient: error on JSON-RPC call 00:41:56.319 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:41:56.319 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # es=1 00:41:56.319 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:56.319 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:56.319 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:56.319 20:40:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5 00:41:56.887 [2024-10-01 20:40:51.914522] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:41:56.887 [2024-10-01 20:40:52.014513] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:41:56.887 [2024-10-01 20:40:52.114520] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:41:56.887 [2024-10-01 20:40:52.114537] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:41:56.887 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:41:56.887 cookie is 0 00:41:56.887 is_local: 1 00:41:56.887 our_own: 0 00:41:56.887 wide_area: 0 00:41:56.887 multicast: 1 00:41:56.887 cached: 1 00:41:57.146 [2024-10-01 20:40:52.214518] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:41:57.146 [2024-10-01 20:40:52.214539] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:41:57.146 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:41:57.146 cookie is 0 00:41:57.146 is_local: 1 00:41:57.146 our_own: 0 00:41:57.146 wide_area: 0 00:41:57.146 multicast: 1 00:41:57.146 cached: 1 00:41:57.146 [2024-10-01 20:40:52.214564] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:41:57.146 [2024-10-01 20:40:52.314518] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:41:57.146 [2024-10-01 20:40:52.314537] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:41:57.146 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:41:57.146 cookie is 0 00:41:57.146 is_local: 1 00:41:57.146 our_own: 0 00:41:57.146 wide_area: 0 00:41:57.146 multicast: 1 00:41:57.146 cached: 1 00:41:57.404 [2024-10-01 20:40:52.414519] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:41:57.404 [2024-10-01 20:40:52.414539] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:41:57.404 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:41:57.404 cookie is 0 00:41:57.404 is_local: 1 00:41:57.404 our_own: 0 00:41:57.404 wide_area: 0 00:41:57.404 multicast: 1 00:41:57.404 cached: 1 00:41:57.404 [2024-10-01 20:40:52.414563] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:41:57.971 [2024-10-01 20:40:53.126366] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:41:57.972 [2024-10-01 20:40:53.126390] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:41:57.972 [2024-10-01 20:40:53.126423] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:41:57.972 [2024-10-01 20:40:53.212452] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0 00:41:58.230 [2024-10-01 20:40:53.272363] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:41:58.230 [2024-10-01 20:40:53.272405] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:41:58.230 [2024-10-01 20:40:53.326239] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:41:58.230 [2024-10-01 20:40:53.326262] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:41:58.230 [2024-10-01 20:40:53.326294] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:41:58.230 [2024-10-01 20:40:53.412358] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem mdns1_nvme0 00:41:58.230 [2024-10-01 20:40:53.472045] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:41:58.230 [2024-10-01 20:40:53.472072] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:42:01.514 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]] 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:42:01.515 
20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # local es=0 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:01.515 [2024-10-01 20:40:56.522360] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:42:01.515 2024/10/01 20:40:56 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 
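
This @225 step (like @217 earlier, which reused the mdns name) is a negative test: a second mDNS discovery for _nvme-disc._tcp must be rejected, and bdev_mdns_client.c answers with JSON-RPC error -17 (File exists), which the Go RPC client then dumps as the request/response pair below. The NOT wrapper simply inverts the command's exit status; done by hand, the same check would read:

    if scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
           -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test; then
        echo "duplicate mdns discovery unexpectedly succeeded" >&2
        exit 1
    fi
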
00:42:01.515 request: 00:42:01.515 { 00:42:01.515 "method": "bdev_nvme_start_mdns_discovery", 00:42:01.515 "params": { 00:42:01.515 "name": "cdc", 00:42:01.515 "svcname": "_nvme-disc._tcp", 00:42:01.515 "hostnqn": "nqn.2021-12.io.spdk:test" 00:42:01.515 } 00:42:01.515 } 00:42:01.515 Got JSON-RPC error response 00:42:01.515 GoRPCClient: error on JSON-RPC call 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # es=1 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:01.515 
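A second bdev_nvme_start_mdns_discovery for the same svcname is expected to fail with Code=-17 (File exists), so the harness asserts the failure rather than the success. The real NOT()/valid_exec_arg machinery in autotest_common.sh is more elaborate than this; the sketch below only captures the inverted-exit-status idea:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NOT() {
      # succeed only when the wrapped command fails
      if "$@"; then
          return 1
      fi
      return 0
  }
  # a duplicate registration for _nvme-disc._tcp must be rejected (Code=-17 Msg=File exists)
  NOT "$rpc_py" -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
      -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test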
20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 found 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:42:01.515 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:42:01.516 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:42:01.516 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:42:01.516 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:42:01.516 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:42:01.516 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:42:01.516 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:42:01.516 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in 
"${lines[@]}" 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:01.516 20:40:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1 00:42:01.516 [2024-10-01 20:40:56.714548] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:42:02.453 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 'not found' 00:42:02.453 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:42:02.453 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 00:42:02.453 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:42:02.453 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:42:02.453 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # 
local output 00:42:02.453 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:42:02.734 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:42:02.734 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:42:02.734 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 94566 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 94566 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 94583 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini 00:42:02.734 Got SIGTERM, quitting. 00:42:02.734 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 
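check_mdns_request_exists() works by scanning avahi-browse's parsable output: '+' records announce a service, '=' records carry the resolved hostname, address and port, all semicolon-separated. A sketch of the substring check performed above, under the same service type and addresses:

  output=$(avahi-browse -t -r _nvme-disc._tcp -p)   # -t: dump once and terminate, -p: parsable records
  found=0
  while IFS= read -r line; do
      # a record naming the process, address and port all at once
      if [[ $line == *spdk1* && $line == *10.0.0.3* && $line == *8009* ]]; then
          found=1
      fi
  done <<< "$output"
  (( found )) && echo "spdk1 advertised on 10.0.0.3:8009" || echo "spdk1 advertisement gone"

After nvmf_subsystem_remove_listener took 10.0.0.3:8009 away, only the spdk0 records remain in the browse output, so the 'not found' variant of the check passes.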
00:42:02.734 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync 00:42:02.734 avahi-daemon 0.8 exiting. 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:02.734 20:40:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:02.734 rmmod nvme_tcp 00:42:02.734 rmmod nvme_fabrics 00:42:03.033 rmmod nvme_keyring 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@515 -- # '[' -n 94530 ']' 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@516 -- # killprocess 94530 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@950 -- # '[' -z 94530 ']' 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # kill -0 94530 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@955 -- # uname 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94530 00:42:03.033 killing process with pid 94530 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94530' 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@969 -- # kill 94530 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@974 -- # wait 94530 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@297 -- # iptr 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@789 -- # iptables-save 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@299 -- # 
nvmf_veth_fini 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:42:03.033 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@300 -- # return 0 00:42:03.293 00:42:03.293 real 0m21.734s 00:42:03.293 user 0m42.407s 00:42:03.293 sys 0m2.058s 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:03.293 ************************************ 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:42:03.293 END TEST nvmf_mdns_discovery 00:42:03.293 ************************************ 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:42:03.293 ************************************ 00:42:03.293 START TEST nvmf_host_multipath 00:42:03.293 ************************************ 00:42:03.293 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:42:03.553 * Looking for test storage... 
00:42:03.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:03.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:03.553 --rc genhtml_branch_coverage=1 00:42:03.553 --rc genhtml_function_coverage=1 00:42:03.553 --rc genhtml_legend=1 00:42:03.553 --rc geninfo_all_blocks=1 00:42:03.553 --rc geninfo_unexecuted_blocks=1 00:42:03.553 00:42:03.553 ' 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:03.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:03.553 --rc genhtml_branch_coverage=1 00:42:03.553 --rc genhtml_function_coverage=1 00:42:03.553 --rc genhtml_legend=1 00:42:03.553 --rc geninfo_all_blocks=1 00:42:03.553 --rc geninfo_unexecuted_blocks=1 00:42:03.553 00:42:03.553 ' 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:03.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:03.553 --rc genhtml_branch_coverage=1 00:42:03.553 --rc genhtml_function_coverage=1 00:42:03.553 --rc genhtml_legend=1 00:42:03.553 --rc geninfo_all_blocks=1 00:42:03.553 --rc geninfo_unexecuted_blocks=1 00:42:03.553 00:42:03.553 ' 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:03.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:03.553 --rc genhtml_branch_coverage=1 00:42:03.553 --rc genhtml_function_coverage=1 00:42:03.553 --rc genhtml_legend=1 00:42:03.553 --rc geninfo_all_blocks=1 00:42:03.553 --rc geninfo_unexecuted_blocks=1 00:42:03.553 00:42:03.553 ' 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:42:03.553 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:03.554 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@458 -- # nvmf_veth_init 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:42:03.554 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:42:03.813 Cannot find device "nvmf_init_br" 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:42:03.813 Cannot find device "nvmf_init_br2" 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:42:03.813 Cannot find device "nvmf_tgt_br" 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:42:03.813 Cannot find device "nvmf_tgt_br2" 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:42:03.813 Cannot find device "nvmf_init_br" 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:42:03.813 Cannot find device "nvmf_init_br2" 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:42:03.813 Cannot find device "nvmf_tgt_br" 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:42:03.813 Cannot find device "nvmf_tgt_br2" 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:42:03.813 Cannot find device "nvmf_br" 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:42:03.813 Cannot find device "nvmf_init_if" 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:42:03.813 Cannot find device "nvmf_init_if2" 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:42:03.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:03.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:42:03.813 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:42:03.814 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:42:03.814 20:40:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:42:03.814 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:42:03.814 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:42:03.814 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:42:03.814 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:42:03.814 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:42:03.814 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:42:03.814 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:42:03.814 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:42:03.814 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:42:03.814 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:42:03.814 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:42:03.814 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
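The "Cannot find device" noise above is expected: nvmf_veth_fini() is speculative cleanup (each step is ||-true'd) before nvmf_veth_init() rebuilds the topology. A condensed sketch of what init creates, with one veth pair shown; the test builds four (init_if, init_if2, tgt_if, tgt_if2) and wires them all through the nvmf_br bridge:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br master nvmf_br

The four pings that follow (10.0.0.3/.4 from the host side, 10.0.0.1/.2 from inside the namespace) verify that the bridge forwards in both directions before any NVMe/TCP traffic is attempted.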
00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:42:04.073 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:42:04.073 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:42:04.073 00:42:04.073 --- 10.0.0.3 ping statistics --- 00:42:04.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:04.073 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:42:04.073 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:42:04.073 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:42:04.073 00:42:04.073 --- 10.0.0.4 ping statistics --- 00:42:04.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:04.073 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:42:04.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:04.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:42:04.073 00:42:04.073 --- 10.0.0.1 ping statistics --- 00:42:04.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:04.073 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:42:04.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:04.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:42:04.073 00:42:04.073 --- 10.0.0.2 ping statistics --- 00:42:04.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:04.073 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # return 0 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # nvmfpid=95233 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # waitforlisten 95233 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 95233 ']' 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:04.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:04.073 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:04.073 [2024-10-01 20:40:59.273740] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
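nvmfappstart then launches the target inside the namespace and blocks until its RPC socket answers. A sketch of the two steps as traced; the polling loop is a simplification (the real waitforlisten in autotest_common.sh also checks the pid and retries with a cap):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # wait until /var/tmp/spdk.sock accepts RPCs
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done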
00:42:04.073 [2024-10-01 20:40:59.273835] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:04.333 [2024-10-01 20:40:59.415903] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:04.333 [2024-10-01 20:40:59.486581] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:04.333 [2024-10-01 20:40:59.486647] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:04.333 [2024-10-01 20:40:59.486662] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:04.333 [2024-10-01 20:40:59.486672] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:04.333 [2024-10-01 20:40:59.486681] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:04.333 [2024-10-01 20:40:59.489155] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:04.333 [2024-10-01 20:40:59.489212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:04.592 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:04.592 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:42:04.592 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:42:04.592 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:04.592 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:04.592 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:04.592 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=95233 00:42:04.592 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:42:04.850 [2024-10-01 20:40:59.968875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:04.850 20:40:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:42:05.108 Malloc0 00:42:05.108 20:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:42:05.367 20:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:05.625 20:41:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:42:05.884 [2024-10-01 20:41:01.087428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:42:05.884 20:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4421 00:42:06.143 [2024-10-01 20:41:01.315581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:42:06.143 20:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95322 00:42:06.143 20:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:42:06.143 20:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 95322 /var/tmp/bdevperf.sock 00:42:06.143 20:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 95322 ']' 00:42:06.143 20:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:06.143 20:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:06.143 20:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:06.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:06.143 20:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:06.143 20:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:06.143 20:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:42:06.710 20:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:06.710 20:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:42:06.710 20:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:42:06.710 20:41:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:42:07.276 Nvme0n1 00:42:07.276 20:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:42:07.535 Nvme0n1 00:42:07.535 20:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:42:07.535 20:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:42:08.467 20:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:42:08.467 20:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:42:08.768 20:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:42:09.026 
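This is the heart of the multipath setup: bdevperf attaches the same controller name over both listeners, the second time with -x multipath, and the test then steers I/O purely by flipping ANA states on the target side. A sketch of the calls just traced, with addresses, ports and sockets as in this run:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # host side: two paths under one controller name on bdevperf's RPC socket
  "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  # target side: make 4421 the optimized path, leave 4420 non-optimized
  "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
  "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4421 -n optimized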
20:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:42:09.026 20:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95233 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:42:09.026 20:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95397 00:42:09.026 20:41:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:42:15.596 20:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:42:15.596 20:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:42:15.596 20:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:42:15.596 20:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:42:15.596 Attaching 4 probes... 00:42:15.596 @path[10.0.0.3, 4421]: 19467 00:42:15.596 @path[10.0.0.3, 4421]: 19932 00:42:15.596 @path[10.0.0.3, 4421]: 20075 00:42:15.596 @path[10.0.0.3, 4421]: 20036 00:42:15.596 @path[10.0.0.3, 4421]: 20126 00:42:15.596 20:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:42:15.596 20:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:42:15.596 20:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:42:15.596 20:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:42:15.596 20:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:42:15.596 20:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:42:15.596 20:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95397 00:42:15.596 20:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:42:15.596 20:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:42:15.597 20:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:42:15.597 20:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:42:15.856 20:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:42:15.856 20:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95527 00:42:15.856 20:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95233 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:42:15.856 20:41:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:42:22.428 20:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:42:22.428 20:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:42:22.428 20:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:42:22.428 20:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:42:22.428 Attaching 4 probes... 00:42:22.428 @path[10.0.0.3, 4420]: 19484 00:42:22.428 @path[10.0.0.3, 4420]: 19921 00:42:22.428 @path[10.0.0.3, 4420]: 19947 00:42:22.428 @path[10.0.0.3, 4420]: 19939 00:42:22.428 @path[10.0.0.3, 4420]: 19713 00:42:22.428 20:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:42:22.428 20:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:42:22.428 20:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:42:22.428 20:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:42:22.428 20:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:42:22.428 20:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:42:22.428 20:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95527 00:42:22.428 20:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:42:22.428 20:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:42:22.428 20:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:42:22.428 20:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:42:22.687 20:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:42:22.687 20:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95659 00:42:22.687 20:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95233 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:42:22.687 20:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:42:29.270 20:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:42:29.270 20:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:42:29.270 20:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:42:29.270 20:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:42:29.270 Attaching 4 probes... 
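The xtrace keeps cycling through the same two helpers (multipath.sh@58-73). A reconstruction from the trace follows; the RPC invocations and the awk/cut/sed pipeline are verbatim from the log, while the function bodies, control flow, and the globals $rpc_py, $rootdir, $testdir and $nvmfapp_pid are inferred:

    # set_ANA_state <state-for-4420> <state-for-4421>  (sketch reconstructed from the xtrace)
    set_ANA_state() {
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }

    # confirm_io_on_port <expected-ana-state> <expected-port>  (same caveats)
    confirm_io_on_port() {
        # Probe which path the target is actually servicing I/O on.
        "$rootdir/scripts/bpftrace.sh" "$nvmfapp_pid" "$rootdir/scripts/bpf/nvmf_path.bt" \
            &> "$testdir/trace.txt" &
        dtrace_pid=$!
        sleep 6                              # accumulate several probe windows
        # Port of the listener the target reports in the expected ANA state:
        active_port=$("$rpc_py" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
            jq -r ".[] | select(.ana_states[0].ana_state==\"$1\") | .address.trsvcid")
        cat "$testdir/trace.txt"
        # First port the probes saw traffic on:
        port=$(awk '$1=="@path[10.0.0.3,"{print $2}' "$testdir/trace.txt" |
            cut -d']' -f1 | sed -n 1p)
        [[ $port == "$2" ]]                  # I/O is flowing on the expected port...
        [[ $active_port == "$2" ]]           # ...and the target's ANA view agrees
        kill $dtrace_pid
        rm -f "$testdir/trace.txt"
    }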
00:42:29.270 @path[10.0.0.3, 4421]: 14933 00:42:29.270 @path[10.0.0.3, 4421]: 19727 00:42:29.270 @path[10.0.0.3, 4421]: 19685 00:42:29.270 @path[10.0.0.3, 4421]: 19774 00:42:29.270 @path[10.0.0.3, 4421]: 19981 00:42:29.270 20:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:42:29.270 20:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:42:29.270 20:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:42:29.270 20:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:42:29.270 20:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:42:29.270 20:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:42:29.270 20:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95659 00:42:29.270 20:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:42:29.270 20:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:42:29.270 20:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:42:29.270 20:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:42:29.530 20:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:42:29.530 20:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95233 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:42:29.530 20:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95795 00:42:29.530 20:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:42:36.095 20:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:42:36.095 20:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:42:36.095 20:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:42:36.095 20:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:42:36.095 Attaching 4 probes... 
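The next cycle sets both listeners inaccessible, so there is no servable path at all: the probe window that follows records no @path samples, and the jq filter, which now selects ana_state=="", matches no listener, so both sides of the comparison collapse to empty strings. In isolation:

    # With every listener inaccessible, nothing matches the empty-state filter,
    # so active_port comes back empty and is compared against an empty probe set:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners \
        nqn.2016-06.io.spdk:cnode1 |
        jq -r '.[] | select(.ana_states[0].ana_state=="") | .address.trsvcid'
    # (no output)  ->  [[ '' == '' ]] passes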
00:42:36.095 00:42:36.095 00:42:36.095 00:42:36.095 00:42:36.095 00:42:36.095 20:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:42:36.095 20:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:42:36.095 20:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:42:36.095 20:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:42:36.095 20:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:42:36.095 20:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:42:36.095 20:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95795 00:42:36.095 20:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:42:36.095 20:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:42:36.095 20:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:42:36.095 20:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:42:36.354 20:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:42:36.613 20:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95233 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:42:36.613 20:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95930 00:42:36.613 20:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:42:43.178 20:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:42:43.178 20:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:42:43.178 20:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:42:43.178 20:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:42:43.178 Attaching 4 probes... 
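To keep the cycles straight, here is the ANA matrix the script has exercised so far and the path each confirm_io_on_port call asserts (an editorial recap of the runs above, not new output):

    # state on 4420   state on 4421    -> port expected to carry I/O
    printf '%s\n' \
        'non_optimized  optimized     -> 4421' \
        'non_optimized  inaccessible  -> 4420' \
        'inaccessible   optimized     -> 4421' \
        'inaccessible   inaccessible  -> (none)' \
        'non_optimized  optimized     -> 4421  (recovery, confirmed just below)'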
00:42:43.178 @path[10.0.0.3, 4421]: 18923 00:42:43.178 @path[10.0.0.3, 4421]: 19878 00:42:43.178 @path[10.0.0.3, 4421]: 19947 00:42:43.178 @path[10.0.0.3, 4421]: 19694 00:42:43.178 @path[10.0.0.3, 4421]: 19803 00:42:43.178 20:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:42:43.178 20:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:42:43.178 20:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:42:43.178 20:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:42:43.178 20:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:42:43.178 20:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:42:43.178 20:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95930 00:42:43.178 20:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:42:43.178 20:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:42:43.178 [2024-10-01 20:41:38.210280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3a100 is same with the state(6) to be set 00:42:43.178 [2024-10-01 20:41:38.210333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3a100 is same with the state(6) to be set 00:42:43.178 [2024-10-01 20:41:38.210345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3a100 is same with the state(6) to be set 00:42:43.178 [2024-10-01 20:41:38.210354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3a100 is same with the state(6) to be set 00:42:43.178 [2024-10-01 20:41:38.210367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3a100 is same with the state(6) to be set 00:42:43.178 [2024-10-01 20:41:38.210375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3a100 is same with the state(6) to be set 00:42:43.178 [2024-10-01 20:41:38.210384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3a100 is same with the state(6) to be set 00:42:43.178 20:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:42:44.127 20:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:42:44.127 20:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96062 00:42:44.127 20:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95233 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:42:44.127 20:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:42:50.774 20:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:42:50.774 20:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:42:50.774 20:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@67 -- # active_port=4420 00:42:50.774 20:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:42:50.774 Attaching 4 probes... 00:42:50.774 @path[10.0.0.3, 4420]: 18854 00:42:50.774 @path[10.0.0.3, 4420]: 19468 00:42:50.774 @path[10.0.0.3, 4420]: 19121 00:42:50.774 @path[10.0.0.3, 4420]: 19263 00:42:50.774 @path[10.0.0.3, 4420]: 19378 00:42:50.774 20:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:42:50.774 20:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:42:50.774 20:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:42:50.774 20:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:42:50.774 20:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:42:50.774 20:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:42:50.774 20:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96062 00:42:50.774 20:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:42:50.774 20:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:42:50.774 [2024-10-01 20:41:45.785285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:42:50.774 20:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:42:51.034 20:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:42:57.604 20:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:42:57.604 20:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96261 00:42:57.604 20:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95233 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:42:57.604 20:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:43:02.877 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:43:02.877 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:43:03.470 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:43:03.470 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:03.470 Attaching 4 probes... 
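The remove/re-add sequence above is the hot-unplug half of the test: deleting the 4421 listener while bdevperf is mid-run produces the burst of tcp.c:1773 recv-state errors seen earlier (the target tearing down a qpair that still has I/O outstanding), and the host, attached with -l -1, retries and falls back to 4420 within the 1 s sleep. Re-adding the listener and marking it optimized brings I/O back to 4421, as the samples that follow confirm. Just those RPCs, as logged:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    sleep 1     # host notices the dead path and fails over to 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4421 -n optimized
    sleep 6     # give the host time to reconnect and re-read ANA states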
00:43:03.470 @path[10.0.0.3, 4421]: 18852 00:43:03.470 @path[10.0.0.3, 4421]: 19013 00:43:03.470 @path[10.0.0.3, 4421]: 19420 00:43:03.470 @path[10.0.0.3, 4421]: 19320 00:43:03.470 @path[10.0.0.3, 4421]: 19267 00:43:03.470 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:43:03.470 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:43:03.470 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:43:03.470 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:43:03.470 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:43:03.470 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:43:03.470 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96261 00:43:03.470 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:03.470 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95322 00:43:03.470 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 95322 ']' 00:43:03.470 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 95322 00:43:03.470 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:43:03.471 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:03.471 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95322 00:43:03.471 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:43:03.471 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:43:03.471 killing process with pid 95322 00:43:03.471 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95322' 00:43:03.471 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 95322 00:43:03.471 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 95322 00:43:03.471 { 00:43:03.471 "results": [ 00:43:03.471 { 00:43:03.471 "job": "Nvme0n1", 00:43:03.471 "core_mask": "0x4", 00:43:03.471 "workload": "verify", 00:43:03.471 "status": "terminated", 00:43:03.471 "verify_range": { 00:43:03.471 "start": 0, 00:43:03.471 "length": 16384 00:43:03.471 }, 00:43:03.471 "queue_depth": 128, 00:43:03.471 "io_size": 4096, 00:43:03.471 "runtime": 55.696534, 00:43:03.471 "iops": 8333.08586132128, 00:43:03.471 "mibps": 32.55111664578625, 00:43:03.471 "io_failed": 0, 00:43:03.471 "io_timeout": 0, 00:43:03.471 "avg_latency_us": 15334.682066312997, 00:43:03.471 "min_latency_us": 1735.2145454545455, 00:43:03.471 "max_latency_us": 7076934.749090909 00:43:03.471 } 00:43:03.471 ], 00:43:03.471 "core_count": 1 00:43:03.471 } 00:43:03.471 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95322 00:43:03.471 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:43:03.471 [2024-10-01 20:41:01.389601] Starting SPDK v25.01-pre git sha1 1b1c3081e / 
DPDK 24.03.0 initialization... 00:43:03.471 [2024-10-01 20:41:01.389680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95322 ] 00:43:03.471 [2024-10-01 20:41:01.520817] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:03.471 [2024-10-01 20:41:01.572943] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:43:03.471 [2024-10-01 20:41:02.602734] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:43:03.471 Running I/O for 90 seconds... 00:43:03.471 10532.00 IOPS, 41.14 MiB/s 10288.00 IOPS, 40.19 MiB/s 10170.67 IOPS, 39.73 MiB/s 10125.50 IOPS, 39.55 MiB/s 10103.40 IOPS, 39.47 MiB/s 10086.67 IOPS, 39.40 MiB/s 10090.71 IOPS, 39.42 MiB/s 10052.50 IOPS, 39.27 MiB/s [2024-10-01 20:41:11.024764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.471 [2024-10-01 20:41:11.024824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:43:03.471 [2024-10-01 20:41:11.024891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.471 [2024-10-01 20:41:11.024911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:43:03.471 [2024-10-01 20:41:11.024931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.471 [2024-10-01 20:41:11.024946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:43:03.471 [2024-10-01 20:41:11.024966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.471 [2024-10-01 20:41:11.024980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:43:03.471 [2024-10-01 20:41:11.024999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.471 [2024-10-01 20:41:11.025013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:43:03.471 [2024-10-01 20:41:11.025033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.471 [2024-10-01 20:41:11.025046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:43:03.471 [2024-10-01 20:41:11.025065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.471 [2024-10-01 20:41:11.025079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
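Two things are worth decoding in the output above. First, the bdevperf summary JSON is internally consistent: "mibps" is just iops times the 4096 B io_size. Second, every completion stamped "(03/02)" in the try.txt dump is NVMe status SCT 0x3 (Path Related Status), SC 0x02 (Asymmetric Access Inaccessible): the target failing I/O on a freshly demoted path, which the host's multipath layer then reissues on the surviving one. A quick check of both (file path as logged):

    # Summary sanity check: 8333.09 IOPS * 4096 B per I/O ~= 32.55 MiB/s over the 55.7 s run.
    awk 'BEGIN { printf "%.5f MiB/s\n", 8333.08586132128 * 4096 / 1048576 }'
    # Count how many I/Os bounced off inaccessible paths during the run:
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt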
00:43:03.471 [2024-10-01 20:41:11.025098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.471 [2024-10-01 20:41:11.025112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:43:03.471 [2024-10-01 20:41:11.025181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.471 [2024-10-01 20:41:11.025199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:43:03.471 [2024-10-01 20:41:11.025249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.471 [2024-10-01 20:41:11.025268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:43:03.471 [2024-10-01 20:41:11.025291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.471 [2024-10-01 20:41:11.025307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:43:03.471 [2024-10-01 20:41:11.025330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.471 [2024-10-01 20:41:11.025345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:03.471 [2024-10-01 20:41:11.025367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.471 [2024-10-01 20:41:11.025383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:03.471 [2024-10-01 20:41:11.025405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.471 [2024-10-01 20:41:11.025421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:03.471 [2024-10-01 20:41:11.025443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.471 [2024-10-01 20:41:11.025474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:03.471 [2024-10-01 20:41:11.025526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.471 [2024-10-01 20:41:11.025540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:03.471 [2024-10-01 20:41:11.025560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.471 [2024-10-01 20:41:11.025574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:03.471 [2024-10-01 20:41:11.025595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.471 [2024-10-01 20:41:11.025609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:03.471 [2024-10-01 20:41:11.025629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.471 [2024-10-01 20:41:11.025644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:03.471 [2024-10-01 20:41:11.025664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.471 [2024-10-01 20:41:11.025678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:43:03.471 [2024-10-01 20:41:11.025698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.471 [2024-10-01 20:41:11.025712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.025732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.025755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.025777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.025792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.025829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.025844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.025865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.025880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.025901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.025916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.025966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.025981] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.026018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.026034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.026055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.026070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.026091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.026107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.026145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.026161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.026183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.026200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.026223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.026257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.026282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.026299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.026331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.026349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.026373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.026389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.026412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 
20:41:11.026428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.026466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.026482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.026520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.026535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.026557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.026573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.026594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.026610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.026632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.026647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.026668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.026684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.026705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.026721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.026742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.026758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.026779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.026794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.026822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6992 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.026839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.026860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.026875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:43:03.472 [2024-10-01 20:41:11.026896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.472 [2024-10-01 20:41:11.026912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.029694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.473 [2024-10-01 20:41:11.029733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.029765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.029784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.029807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.029823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.029845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.029861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.029883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.029899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.029921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.029936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.029958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.029974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.029995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:75 nsid:1 lba:7072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.030011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.030033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.030049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.030071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.030099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.030155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.030174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.030198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.030215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.030238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.030254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.030277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.030294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.030326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.030342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.030365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.030382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.030405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.030422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.030459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.030490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.030511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.030527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.030548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.030564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.030585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.030601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.030622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.030646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.030669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.030686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.030707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.030723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.030744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.030759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.030780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.030796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:43:03.473 [2024-10-01 20:41:11.030817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.473 [2024-10-01 20:41:11.030833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:03.473 
[2024-10-01 20:41:11.030855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:43:03.473 [2024-10-01 20:41:11.030870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... identical WRITE command/completion NOTICE pairs repeat for lba:7240 through lba:7288 (cid varies, sqhd:0063-0069); every completion on qid:1 reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:43:03.474 9990.56 IOPS, 39.03 MiB/s 9976.80 IOPS, 38.97 MiB/s 9975.64 IOPS, 38.97 MiB/s 9973.33 IOPS, 38.96 MiB/s 9982.92 IOPS, 39.00 MiB/s 9972.50 IOPS, 38.96 MiB/s
[2024-10-01 20:41:17.621903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:03.474 [2024-10-01 20:41:17.621955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:43:03.474 [2024-10-01 20:41:17.622003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:43:03.474 [2024-10-01 20:41:17.622019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0
[... the burst continues through [2024-10-01 20:41:17.634192]: further READ (lba:2032-2296, SGL TRANSPORT DATA BLOCK TRANSPORT) and WRITE (lba:2304-3048, SGL DATA BLOCK OFFSET) command/completion NOTICE pairs on qid:1, every one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:43:03.481 [2024-10-01 20:41:17.634212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:03.481 [2024-10-01 20:41:17.634227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0052 p:0
m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.634248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.481 [2024-10-01 20:41:17.634263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.634283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.481 [2024-10-01 20:41:17.634298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.634321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.481 [2024-10-01 20:41:17.634337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.634357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.634372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.634392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.634408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.634428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.634443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.634464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.634479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.634499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.634514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.634568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.634600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.634619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.634633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.634655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.634670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.634690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.634704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.634723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.634737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.634757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.646193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.646261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.646287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.646317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.646337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.646365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.646385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.646413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.646432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.646466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.646486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.646519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.646539] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.646566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.646603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.646633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.646653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.646681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.646700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.646728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.646747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.646774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.646794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.646821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.646841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.646869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.646889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.646916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.646935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.646963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.481 [2024-10-01 20:41:17.646982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:43:03.481 [2024-10-01 20:41:17.647009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.647029] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.647056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.647075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.647103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.647151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.648129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.648191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.648228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.648251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.648279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.648300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.648327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.648347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.648374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.648393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.648421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.648441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.648468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.648488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.648515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:43:03.482 [2024-10-01 20:41:17.648535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.648570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.648589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.648616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.648636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.648663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.648683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.648711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.648730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.648757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.648777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.648815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.648836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.648863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.648882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.648910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.648929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.648956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.648976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.649002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2736 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.649022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.649050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.649069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.649096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.649139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.649180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.649200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.649227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.649247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.649274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.482 [2024-10-01 20:41:17.649294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:43:03.482 [2024-10-01 20:41:17.649320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.649340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.649367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.649387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.649424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.649445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.649473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.649492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.649523] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.649543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.649570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.649590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.649617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.649637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.649665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.649685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.649713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.649733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.649759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.649779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.649806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.649826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.649853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.649872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.649899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.649919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.649946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.649966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.649994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.650021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.650050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.650070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.650097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.650134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.650164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.650184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.650211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.650232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.650259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.650278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.650307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.650326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.650353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.650373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.650400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.650420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.650447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.650467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:03.483 
[2024-10-01 20:41:17.650494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.650518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.650545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.650564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.650591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.650611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.650651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.650672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.650699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.650719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.650746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.650766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:03.483 [2024-10-01 20:41:17.650793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.483 [2024-10-01 20:41:17.650813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.650840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.484 [2024-10-01 20:41:17.650859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.650886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.484 [2024-10-01 20:41:17.650906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.650934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.484 [2024-10-01 20:41:17.650953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 
cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.650980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.484 [2024-10-01 20:41:17.651000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.651027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.484 [2024-10-01 20:41:17.651047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.651074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.484 [2024-10-01 20:41:17.651093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.651144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.484 [2024-10-01 20:41:17.651167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.651194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.484 [2024-10-01 20:41:17.651214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.651252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.484 [2024-10-01 20:41:17.651274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.651301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.484 [2024-10-01 20:41:17.651322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.652334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.484 [2024-10-01 20:41:17.652370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.652404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.484 [2024-10-01 20:41:17.652426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.652454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.484 [2024-10-01 20:41:17.652475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.652502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.484 [2024-10-01 20:41:17.652522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.652549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.484 [2024-10-01 20:41:17.652569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.652596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.484 [2024-10-01 20:41:17.652616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.652643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.484 [2024-10-01 20:41:17.652663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.652690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.484 [2024-10-01 20:41:17.652710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.652739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.484 [2024-10-01 20:41:17.652759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.652786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.484 [2024-10-01 20:41:17.652806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.652834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.484 [2024-10-01 20:41:17.652867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.652897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.484 [2024-10-01 20:41:17.652917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.652945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.484 [2024-10-01 20:41:17.652964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.652992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.484 [2024-10-01 20:41:17.653011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.653039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.484 [2024-10-01 20:41:17.653058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.653085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.484 [2024-10-01 20:41:17.653105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.653163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.484 [2024-10-01 20:41:17.653184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.653211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.484 [2024-10-01 20:41:17.653231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.653258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.484 [2024-10-01 20:41:17.653277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:43:03.484 [2024-10-01 20:41:17.653305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.485 [2024-10-01 20:41:17.653324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:43:03.485 [2024-10-01 20:41:17.653351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.485 [2024-10-01 20:41:17.653371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:43:03.485 [2024-10-01 20:41:17.653398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.485 [2024-10-01 20:41:17.653418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:43:03.485 [2024-10-01 20:41:17.653445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:43:03.485 [2024-10-01 20:41:17.653474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:43:03.485 [2024-10-01 20:41:17.653502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.485 [2024-10-01 20:41:17.653523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:43:03.485 [2024-10-01 20:41:17.653551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.485 [2024-10-01 20:41:17.653571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:43:03.485 [2024-10-01 20:41:17.653598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.485 [2024-10-01 20:41:17.653618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:43:03.485 [2024-10-01 20:41:17.653645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.485 [2024-10-01 20:41:17.653665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:43:03.485 [2024-10-01 20:41:17.653692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.485 [2024-10-01 20:41:17.653712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:43:03.485 [2024-10-01 20:41:17.653740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.485 [2024-10-01 20:41:17.653759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:43:03.485 [2024-10-01 20:41:17.653786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.485 [2024-10-01 20:41:17.653806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:43:03.485 [2024-10-01 20:41:17.653833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.485 [2024-10-01 20:41:17.653853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:43:03.485 [2024-10-01 20:41:17.653880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.485 [2024-10-01 20:41:17.653899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:03.485 [2024-10-01 20:41:17.653927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 
lba:2288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:03.485 [2024-10-01 20:41:17.653947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:43:03.485 [2024-10-01 20:41:17.653974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:03.485 [2024-10-01 20:41:17.653994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:43:03.485 [2024-10-01 20:41:17.654076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:43:03.485 [2024-10-01 20:41:17.654097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
[... several hundred similar *NOTICE* lines omitted: nvme_io_qpair_print_command / spdk_nvme_print_completion pairs reporting READ and WRITE commands on qid:1 (lba 2032-3048, len:8), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:43:03.492 [2024-10-01 20:41:17.664034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:43:03.492 [2024-10-01 20:41:17.664049] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:03.492 [2024-10-01 20:41:17.664069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.492 [2024-10-01 20:41:17.664084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:03.492 [2024-10-01 20:41:17.664104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.492 [2024-10-01 20:41:17.664120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:03.492 [2024-10-01 20:41:17.664168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.492 [2024-10-01 20:41:17.664184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:03.492 [2024-10-01 20:41:17.664218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.492 [2024-10-01 20:41:17.664239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:03.492 [2024-10-01 20:41:17.664260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.492 [2024-10-01 20:41:17.664275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:03.492 [2024-10-01 20:41:17.664294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.492 [2024-10-01 20:41:17.664308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:03.492 [2024-10-01 20:41:17.664327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.492 [2024-10-01 20:41:17.664341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:43:03.492 [2024-10-01 20:41:17.664362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.492 [2024-10-01 20:41:17.664376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:43:03.492 [2024-10-01 20:41:17.664395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.492 [2024-10-01 20:41:17.664409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:43:03.492 [2024-10-01 20:41:17.664428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.492 
[2024-10-01 20:41:17.664442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:03.492 [2024-10-01 20:41:17.664461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.492 [2024-10-01 20:41:17.664477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:03.492 [2024-10-01 20:41:17.664499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.492 [2024-10-01 20:41:17.664514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:03.492 [2024-10-01 20:41:17.664533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.492 [2024-10-01 20:41:17.664547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:03.492 [2024-10-01 20:41:17.664567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.492 [2024-10-01 20:41:17.664582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:43:03.492 [2024-10-01 20:41:17.665248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.492 [2024-10-01 20:41:17.665274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:43:03.492 [2024-10-01 20:41:17.665298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.492 [2024-10-01 20:41:17.665313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:43:03.492 [2024-10-01 20:41:17.665343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.492 [2024-10-01 20:41:17.665359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:43:03.492 [2024-10-01 20:41:17.665379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.492 [2024-10-01 20:41:17.665393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:43:03.492 [2024-10-01 20:41:17.665412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.492 [2024-10-01 20:41:17.665426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:43:03.492 [2024-10-01 20:41:17.665445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2056 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:43:03.492 [2024-10-01 20:41:17.665459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:43:03.492 [2024-10-01 20:41:17.665478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.492 [2024-10-01 20:41:17.665493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:43:03.492 [2024-10-01 20:41:17.665512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.665541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.665561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.665575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.665595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.665609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.665629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.665643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.665663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.665678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.665697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.665712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.665733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.665748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.665774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.665790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.665810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:113 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.665824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.665844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.665858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.665893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.665907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.665926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.665940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.665960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.665974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.665992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.666007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.666025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.666040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.666059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.666073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.666092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.666106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.666142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.666169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.666192] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.666207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.666226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.666248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.666270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.666284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.666304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.666319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.666339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.666354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.666373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.666388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.666407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.666422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.666442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.493 [2024-10-01 20:41:17.666456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:43:03.493 [2024-10-01 20:41:17.666476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.494 [2024-10-01 20:41:17.666491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.666525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.494 [2024-10-01 20:41:17.666555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 
00:43:03.494 [2024-10-01 20:41:17.666575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.494 [2024-10-01 20:41:17.666590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.666609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.494 [2024-10-01 20:41:17.666623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.666643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.666658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.666678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.666698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.666719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.666734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.666754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.666769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.666789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.666803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.674315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.674350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.674375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.674390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.674410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.674424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.674443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.674458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.674477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.674491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.674510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.674524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.674544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.674558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.674577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.674591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.674610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.674624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.674658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.674673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.674692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.674706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.674725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.674740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.674760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.674774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.674793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.674807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.674826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.674840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.674860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.674874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.674893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.674907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.674926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.674940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.674959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.674973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.674993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.675007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.675027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.675041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.675339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.675378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.675423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.494 [2024-10-01 20:41:17.675444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:43:03.494 [2024-10-01 20:41:17.675468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.675484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.675507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.675537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.675559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.675574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.675597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.675611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.675634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.675648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.675671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.675685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.675708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.675722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.675745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.675759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.675782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.675796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.675819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 
[2024-10-01 20:41:17.675863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.675889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.675914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.675941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.675957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.675982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.675997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.676021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.676037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.676061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.676076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.676100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.676115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.676170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.676188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.676241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.676256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.676280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.676310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.676335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2744 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.676350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.676374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.676389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.676414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.676429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.676453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.676469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.676501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.676517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.676541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.676557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.676581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.676596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.676620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.676635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.676659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.676674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.676698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.676728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.676766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:7 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.676781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.676804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.676818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.676841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.495 [2024-10-01 20:41:17.676855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:43:03.495 [2024-10-01 20:41:17.676895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.676910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.676933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.676948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.676971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.676986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.677019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.677059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.677097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.677135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.677173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.677222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.677280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.677320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.677359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.677415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.677455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.677496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.677536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.677599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:03.496 
[2024-10-01 20:41:17.677640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.677693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.677731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.677769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.677808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.677846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.677884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.677922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.677974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.677988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.678011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.496 [2024-10-01 20:41:17.678026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 
sqhd:002c p:0 m:0 dnr:0 00:43:03.496 [2024-10-01 20:41:17.678048 - 20:41:17.678331] [... 4 identical WRITE command/completion NOTICE pairs elided: sqid:1 nsid:1 lba:2560-2584 len:8 SGL DATA BLOCK OFFSET, each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 sqhd:002d-0030 p:0 m:0 dnr:0 ...]
00:43:03.497 9847.33 IOPS, 38.47 MiB/s 9300.81 IOPS, 36.33 MiB/s 9332.24 IOPS, 36.45 MiB/s 9364.33 IOPS, 36.58 MiB/s 9393.11 IOPS, 36.69 MiB/s 9422.20 IOPS, 36.81 MiB/s 9445.52 IOPS, 36.90 MiB/s
00:43:03.497-00:43:03.501 [2024-10-01 20:41:24.734113 - 20:41:24.743637] [... long run of near-identical READ/WRITE command/completion NOTICE pairs elided: READ sqid:1 nsid:1 lba:85152-85336 len:8 SGL TRANSPORT DATA BLOCK and WRITE sqid:1 nsid:1 lba:85344-86032 len:8 SGL DATA BLOCK OFFSET, each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0; many LBAs recur with new cids as the failed I/Os are requeued and retried ...]
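[editor's note: the "(03/02)" pair printed on every completion above is (SCT/SC) - Status Code Type 0x3 (Path Related Status) with Status Code 0x02 (Asymmetric Access Inaccessible), i.e. what the target reports while the namespace's ANA group is inaccessible during the failover window this test exercises. Below is a minimal stand-alone C sketch of that decoding, assuming the NVMe completion status bit layout (phase tag in bit 0, SC in bits 8:1, SCT in bits 11:9, DNR in bit 15); it is illustrative only and not part of the test run.]

#include <stdint.h>
#include <stdio.h>

/* Decode the status halfword of an NVMe completion queue entry the way the
 * "(03/02)" pair above reads it: (SCT/SC). Assumed layout per the NVMe base
 * spec: bit 0 phase tag, bits 8:1 status code (SC), bits 11:9 status code
 * type (SCT), bits 13:12 CRD, bit 14 more, bit 15 do-not-retry (DNR). */
static const char *decode_status(uint16_t status)
{
    uint8_t sc  = (status >> 1) & 0xff; /* status code */
    uint8_t sct = (status >> 9) & 0x7;  /* status code type */

    if (sct == 0x3 && sc == 0x02) {
        return "ASYMMETRIC ACCESS INACCESSIBLE";
    }
    return "other status";
}

int main(void)
{
    /* Build the status seen in the log: SCT=0x3, SC=0x02, p/m/dnr all 0. */
    uint16_t status = (uint16_t)((0x3 << 9) | (0x02 << 1));

    printf("(%02x/%02x) -> %s\n",
           (status >> 9) & 0x7, (status >> 1) & 0xff, decode_status(status));
    return 0;
}

[prints "(03/02) -> ASYMMETRIC ACCESS INACCESSIBLE", matching the completions logged above and below.]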
00:43:03.501-00:43:03.503 [2024-10-01 20:41:24.743663 - 20:41:24.746369] [... remaining near-identical command/completion NOTICE pairs elided: WRITE sqid:1 nsid:1 lba:86040-86168 len:8 SGL DATA BLOCK OFFSET, retried WRITE lba:85592-85664, and retried READ lba:85160-85336 SGL TRANSPORT DATA BLOCK, each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0 ...]
00:43:03.503 [2024-10-01 20:41:24.746400] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:85336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.503 [2024-10-01 20:41:24.746418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:43:03.503 [2024-10-01 20:41:24.746440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.503 [2024-10-01 20:41:24.746472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:43:03.503 [2024-10-01 20:41:24.746522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.503 [2024-10-01 20:41:24.746536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:43:03.503 [2024-10-01 20:41:24.746555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.503 [2024-10-01 20:41:24.746569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:43:03.503 [2024-10-01 20:41:24.746588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.503 [2024-10-01 20:41:24.746602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:43:03.503 [2024-10-01 20:41:24.746620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:85368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.503 [2024-10-01 20:41:24.746634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:43:03.503 [2024-10-01 20:41:24.746653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.503 [2024-10-01 20:41:24.746667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:03.503 [2024-10-01 20:41:24.746686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.503 [2024-10-01 20:41:24.746700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:43:03.503 [2024-10-01 20:41:24.746719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.503 [2024-10-01 20:41:24.746733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:43:03.503 [2024-10-01 20:41:24.746752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.503 [2024-10-01 20:41:24.746766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 
00:43:03.503 [2024-10-01 20:41:24.746784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.503 [2024-10-01 20:41:24.746798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:43:03.503 [2024-10-01 20:41:24.746819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.503 [2024-10-01 20:41:24.746834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:43:03.503 [2024-10-01 20:41:24.746859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.503 [2024-10-01 20:41:24.746874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:43:03.503 [2024-10-01 20:41:24.746893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.503 [2024-10-01 20:41:24.746907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:43:03.503 [2024-10-01 20:41:24.746926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.503 [2024-10-01 20:41:24.746940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.746959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.746973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.746992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.747006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.747024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.747038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.747057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.747071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.747090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.747104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.747156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.747176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.747232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.747251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.747273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.747290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.747312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.759006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.759055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.759086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.759138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.759173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.759195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.759211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.759233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.759249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.759270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.759285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.759307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.759322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.759344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.759360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.760120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.760166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.760197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.760231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.760252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.760269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.760290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.760305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.760326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.760342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.760381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.760402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.760472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.760495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.760523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.760543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.760572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 
[2024-10-01 20:41:24.760592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.760631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.760652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.760680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.760700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.760728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.760749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:43:03.504 [2024-10-01 20:41:24.760777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.504 [2024-10-01 20:41:24.760797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.760836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.760868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.760907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.760928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.760956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.760976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.761005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.761025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.761054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.761074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.761112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85800 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.761168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.761199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.761220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.761249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.761279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.761307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.761328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.761356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.761377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.761406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.761426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.761455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.761476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.761522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.761547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.761576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.761600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.761629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.761650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.761678] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.761699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.761727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.761748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.761777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.761807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.761837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.761858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.761886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.761907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.761936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.761957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.761986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.762007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.762035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:85936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.762056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.762085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.762106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.762163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.762185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 
20:41:24.762214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.762234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.762263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.762284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.762312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.505 [2024-10-01 20:41:24.762343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:43:03.505 [2024-10-01 20:41:24.762372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.762392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.762421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.762450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.762480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.762502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.762536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.762561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.762599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.762619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.762650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.762671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.762700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:86032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.762720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 
sqhd:001a p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.762749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:86040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.762770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.762799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.762820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.762848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:86056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.762869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.762906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.762927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.762964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.762985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.763013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:86080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.763034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.763062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:86088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.763083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.763144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.763167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.763195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.763216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.763245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.763266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.763294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.763315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.763344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.763364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.763393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.763413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.763442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.763463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.763491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:86152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.763511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.763541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:86160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.763562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.764552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.764589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.764625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.764648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.764677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.764700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.764743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.764765] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.764794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.764816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.764845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.764866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.764894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.764914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.764943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.764964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.764993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.506 [2024-10-01 20:41:24.765013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:43:03.506 [2024-10-01 20:41:24.765041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.507 [2024-10-01 20:41:24.765062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.765091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.507 [2024-10-01 20:41:24.765128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.765160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:85160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.765182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.765210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:85168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.765231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.765260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85176 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.765280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.765308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.765329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.765357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:85192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.765387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.765417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.765438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.765467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:85208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.765487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.765516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.765537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.765565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:85224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.765585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.765614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:85232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.765635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.765664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:85240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.765684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.765713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:85248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.765734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.765763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:33 nsid:1 lba:85256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.765784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.765812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:85264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.765833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.765861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.765882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.765910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:85280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.765931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.765959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:85288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.765988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.766018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:85296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.766039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.766068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.766089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.766143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:85312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.766167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.766196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:85320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.766217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.766245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:85328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.766266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.766294] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:85336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.766315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.766343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.507 [2024-10-01 20:41:24.766364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.766392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.507 [2024-10-01 20:41:24.766413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.766442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.507 [2024-10-01 20:41:24.766463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.766491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.507 [2024-10-01 20:41:24.766519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.766548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.507 [2024-10-01 20:41:24.766569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.766598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.507 [2024-10-01 20:41:24.766627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.766658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.507 [2024-10-01 20:41:24.766679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.766707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.507 [2024-10-01 20:41:24.766728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:43:03.507 [2024-10-01 20:41:24.766757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.507 [2024-10-01 20:41:24.766778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 
[... ~350 further nvme_qpair.c *NOTICE* lines of the same form trimmed: alternating nvme_io_qpair_print_command / spdk_nvme_print_completion pairs in which every outstanding READ/WRITE on qid:1 (lba 85152-86168) completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:43:03.513 [2024-10-01 20:41:24.777307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.777323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.777345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.777361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.777383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.777399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.777420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.777446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.777485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.777530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.777567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.777582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.777601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.777616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.777635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.777650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.777668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.777683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.777701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.777751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.777773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.777788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.777824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.777840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.777861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.777877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.777898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.777929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.777951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.777967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.777989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.778014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.778038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.778054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.778076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:86016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.778092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.778114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:86024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.778147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.778170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.778186] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.778209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:86040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.778225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.778262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.778282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.778305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:86056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.778322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.778345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:86064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.778361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.778384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.778400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.778423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.778456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:43:03.513 [2024-10-01 20:41:24.778510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.513 [2024-10-01 20:41:24.778559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.778580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:86096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.514 [2024-10-01 20:41:24.778595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.778623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.514 [2024-10-01 20:41:24.778640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.778660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:43:03.514 [2024-10-01 20:41:24.786800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.786849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.514 [2024-10-01 20:41:24.786868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.786889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.514 [2024-10-01 20:41:24.786904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.786924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.514 [2024-10-01 20:41:24.786938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.786958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:86144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.514 [2024-10-01 20:41:24.786972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.787275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.514 [2024-10-01 20:41:24.787303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.787348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:86160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.514 [2024-10-01 20:41:24.787369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.787395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:86168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.514 [2024-10-01 20:41:24.787411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.787436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.514 [2024-10-01 20:41:24.787467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.787504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.514 [2024-10-01 20:41:24.787519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.787542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 
lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.514 [2024-10-01 20:41:24.787556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.787593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.514 [2024-10-01 20:41:24.787609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.787632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.514 [2024-10-01 20:41:24.787647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.787671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.514 [2024-10-01 20:41:24.787685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.787708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.514 [2024-10-01 20:41:24.787723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.787746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.514 [2024-10-01 20:41:24.787760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.787783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.514 [2024-10-01 20:41:24.787798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.787820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.514 [2024-10-01 20:41:24.787866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.787895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.514 [2024-10-01 20:41:24.787912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.787939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.514 [2024-10-01 20:41:24.787956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.787982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:85176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.514 [2024-10-01 20:41:24.787999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.788026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.514 [2024-10-01 20:41:24.788042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.788069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:85192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.514 [2024-10-01 20:41:24.788085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.788112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.514 [2024-10-01 20:41:24.788169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.788231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:85208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.514 [2024-10-01 20:41:24.788247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.788272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:85216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.514 [2024-10-01 20:41:24.788287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.788311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:85224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.514 [2024-10-01 20:41:24.788327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.788351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:85232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.514 [2024-10-01 20:41:24.788366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.788390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.514 [2024-10-01 20:41:24.788406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.788444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:85248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.514 [2024-10-01 20:41:24.788459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:43:03.514 [2024-10-01 20:41:24.788497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:85256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.514 [2024-10-01 20:41:24.788511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.788534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:85264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.514 [2024-10-01 20:41:24.788548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.788571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.514 [2024-10-01 20:41:24.788585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.788608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:85280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.514 [2024-10-01 20:41:24.788622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:43:03.514 [2024-10-01 20:41:24.788645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.514 [2024-10-01 20:41:24.788659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.788682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:85296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.515 [2024-10-01 20:41:24.788703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.788727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:85304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.515 [2024-10-01 20:41:24.788742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.788765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:85312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.515 [2024-10-01 20:41:24.788779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.788802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:85320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.515 [2024-10-01 20:41:24.788816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.788839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.515 [2024-10-01 20:41:24.788854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.788877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:85336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.515 [2024-10-01 20:41:24.788891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.788914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.515 [2024-10-01 20:41:24.788929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.788951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.515 [2024-10-01 20:41:24.788966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.788988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.515 [2024-10-01 20:41:24.789003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.789026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.515 [2024-10-01 20:41:24.789040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.789063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:85368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.515 [2024-10-01 20:41:24.789078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.789100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.515 [2024-10-01 20:41:24.789115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.789155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.515 [2024-10-01 20:41:24.789170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.789212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.515 [2024-10-01 20:41:24.789230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.789255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.515 [2024-10-01 20:41:24.789271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.789295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.515 [2024-10-01 20:41:24.789310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.789334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.515 [2024-10-01 20:41:24.789350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.789374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.515 [2024-10-01 20:41:24.789389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.789421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.515 [2024-10-01 20:41:24.789436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.789489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.515 [2024-10-01 20:41:24.789504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.789527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:85448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.515 [2024-10-01 20:41:24.789541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.789564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.515 [2024-10-01 20:41:24.789578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.789601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.515 [2024-10-01 20:41:24.789616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.789638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.515 [2024-10-01 20:41:24.789653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:43:03.515 [2024-10-01 20:41:24.789675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
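For reading these notices: spdk_nvme_print_completion() reports each completion's status as (SCT/SC), the Status Code Type and Status Code in hex. The (03/02) filling this stretch of the log is SCT 0x3 (Path Related Status) with SC 0x2 (Asymmetric Access Inaccessible): the ANA state of the path behind qid:1 currently forbids I/O, so every command is bounced rather than executed. The (00/08) that appears further down is SCT 0x0 (Generic Command Status) with SC 0x8 (Command Aborted due to SQ Deletion). A minimal shell helper covering just the two pairs seen in this log (the function name is ours, not part of SPDK):

decode_nvme_status() {
    # usage: decode_nvme_status 03 02
    case "$1/$2" in
        03/02) echo 'Path Related Status: Asymmetric Access Inaccessible' ;;
        00/08) echo 'Generic Command Status: Command Aborted due to SQ Deletion' ;;
        *)     echo "SCT 0x$1 / SC 0x$2: see the NVMe base spec status code tables" ;;
    esac
}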
[... the remaining WRITE commands through lba:85552 complete with the same ASYMMETRIC ACCESS INACCESSIBLE (03/02) status ...]
9420.55 IOPS, 36.80 MiB/s 9010.96 IOPS, 35.20 MiB/s 8635.50 IOPS, 33.73 MiB/s 8290.08 IOPS, 32.38 MiB/s
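The interleaved "IOPS, MiB/s" pairs here and just below are rolling throughput samples from the I/O generator driving this test (the reporter itself is not shown in this excerpt). Each I/O in the trace is len:8 blocks with a 0x1000-byte SGL, i.e. 4 KiB, so the two columns should satisfy MiB/s = IOPS * 4096 / 2^20 = IOPS / 256. Checking the first sample:

$ awk 'BEGIN { printf "%.2f MiB/s\n", 9420.55 / 256 }'
36.80 MiB/s

which matches the logged value; the throughput column simply tracks the IOPS column at a fixed 4 KiB I/O size.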
7971.23 IOPS, 31.14 MiB/s 7676.00 IOPS, 29.98 MiB/s 7401.86 IOPS, 28.91 MiB/s 7146.62 IOPS, 27.92 MiB/s 7225.77 IOPS, 28.23 MiB/s 7312.94 IOPS, 28.57 MiB/s 7394.53 IOPS, 28.88 MiB/s 7469.24 IOPS, 29.18 MiB/s 7540.26 IOPS, 29.45 MiB/s 7601.94 IOPS, 29.70 MiB/s
[2024-10-01 20:41:38.210573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:43:03.516 [2024-10-01 20:41:38.210650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
[... WRITE lba:64168-64216 complete the same way (sqhd:0013-0019) ...]
00:43:03.516 [2024-10-01 20:41:38.211114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:43:03.516 [2024-10-01 20:41:38.211173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... from this point every remaining command on qid:1, WRITE lba:64232-64592 and READ lba:63656-63848, is reported as ABORTED - SQ DELETION (00/08) qid:1 cid:0 ...]
[2024-10-01 20:41:38.213668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.213700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.213716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.213730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.213745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.213759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.213774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.213788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.213802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.213816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.213831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.213844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.213860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.518 [2024-10-01 20:41:38.213873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.213888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.213902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.213917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.213931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.213946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.213960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.213981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.213995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.214010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.214038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.214053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.214066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.214080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.214094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.214108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.214137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.214170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.214185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.214216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.214236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.214253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.214269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.214285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.214300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.214317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.214332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.214348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.214363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.214380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.214395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.214412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.214437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.214455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.214471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.214487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.214531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.214561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.214574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.214588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.214602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.214616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.214630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.214644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.214657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.214672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.518 [2024-10-01 20:41:38.214685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.518 [2024-10-01 20:41:38.214699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64088 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.519 [2024-10-01 20:41:38.214713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.519 [2024-10-01 20:41:38.214727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.519 [2024-10-01 20:41:38.214741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.519 [2024-10-01 20:41:38.214755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.519 [2024-10-01 20:41:38.214770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.519 [2024-10-01 20:41:38.214785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.519 [2024-10-01 20:41:38.214798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.519 [2024-10-01 20:41:38.214813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.519 [2024-10-01 20:41:38.214826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.519 [2024-10-01 20:41:38.214840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.519 [2024-10-01 20:41:38.214859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.519 [2024-10-01 20:41:38.214876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.519 [2024-10-01 20:41:38.214889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.519 [2024-10-01 20:41:38.214903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.519 [2024-10-01 20:41:38.214917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.519 [2024-10-01 20:41:38.214931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.519 [2024-10-01 20:41:38.214945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.519 [2024-10-01 20:41:38.214959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:03.519 [2024-10-01 20:41:38.214973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.519 [2024-10-01 20:41:38.214987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.519 
[2024-10-01 20:41:38.215001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.519 [2024-10-01 20:41:38.215015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.519 [2024-10-01 20:41:38.215028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.519 [2024-10-01 20:41:38.215043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.519 [2024-10-01 20:41:38.215056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.519 [2024-10-01 20:41:38.215070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.519 [2024-10-01 20:41:38.215084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.519 [2024-10-01 20:41:38.215099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.519 [2024-10-01 20:41:38.215112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.519 [2024-10-01 20:41:38.215159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.519 [2024-10-01 20:41:38.215175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.519 [2024-10-01 20:41:38.215203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.519 [2024-10-01 20:41:38.215220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.519 [2024-10-01 20:41:38.215236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.519 [2024-10-01 20:41:38.215251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.519 [2024-10-01 20:41:38.215511] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x722fd0 was disconnected and freed. reset controller. 
00:43:03.519 [2024-10-01 20:41:38.216974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:03.519 [2024-10-01 20:41:38.217050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:03.519 [2024-10-01 20:41:38.217073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:03.519 [2024-10-01 20:41:38.217102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ab370 (9): Bad file descriptor 00:43:03.519 [2024-10-01 20:41:38.217546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:03.519 [2024-10-01 20:41:38.217576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6ab370 with addr=10.0.0.3, port=4421 00:43:03.519 [2024-10-01 20:41:38.217593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ab370 is same with the state(6) to be set 00:43:03.519 [2024-10-01 20:41:38.218050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ab370 (9): Bad file descriptor 00:43:03.519 [2024-10-01 20:41:38.218359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:03.519 [2024-10-01 20:41:38.218386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:03.519 [2024-10-01 20:41:38.218402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:03.519 [2024-10-01 20:41:38.218483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:03.519 [2024-10-01 20:41:38.218518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:03.519 7656.33 IOPS, 29.91 MiB/s 7698.65 IOPS, 30.07 MiB/s 7749.71 IOPS, 30.27 MiB/s 7799.92 IOPS, 30.47 MiB/s 7843.05 IOPS, 30.64 MiB/s 7888.34 IOPS, 30.81 MiB/s 7929.88 IOPS, 30.98 MiB/s 7961.84 IOPS, 31.10 MiB/s 7997.55 IOPS, 31.24 MiB/s 8032.56 IOPS, 31.38 MiB/s [2024-10-01 20:41:48.267102] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
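(The reconnect to port 4421 above is the second path of a multipath controller failing over after the first listener went away. A minimal sketch of how two TCP paths to the same subsystem are registered against bdevperf's RPC socket; the addresses, NQN, and socket mirror this log, but treat the exact flag set as illustrative rather than the test's literal invocation.)

    # Register the same subsystem twice, once per listener, as one multipath controller.
    # -x multipath lets bdev_nvme fail I/O over from the 4420 path to the 4421 path.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath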
00:43:03.519 8066.91 IOPS, 31.51 MiB/s 8102.26 IOPS, 31.65 MiB/s 8136.69 IOPS, 31.78 MiB/s 8169.69 IOPS, 31.91 MiB/s 8193.86 IOPS, 32.01 MiB/s 8219.69 IOPS, 32.11 MiB/s 8246.40 IOPS, 32.21 MiB/s 8272.09 IOPS, 32.31 MiB/s 8299.46 IOPS, 32.42 MiB/s 8322.85 IOPS, 32.51 MiB/s Received shutdown signal, test time was about 55.697257 seconds 00:43:03.519 00:43:03.519 Latency(us) 00:43:03.519 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:03.519 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:43:03.519 Verification LBA range: start 0x0 length 0x4000 00:43:03.519 Nvme0n1 : 55.70 8333.09 32.55 0.00 0.00 15334.68 1735.21 7076934.75 00:43:03.519 =================================================================================================================== 00:43:03.519 Total : 8333.09 32.55 0.00 0.00 15334.68 1735.21 7076934.75 00:43:03.519 [2024-10-01 20:41:58.481282] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:43:03.520 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:03.780 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:43:03.780 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:43:03.780 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:43:03.780 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:43:03.780 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:43:03.780 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:03.780 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:43:03.780 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:03.780 20:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:03.780 rmmod nvme_tcp 00:43:03.780 rmmod nvme_fabrics 00:43:03.780 rmmod nvme_keyring 00:43:03.780 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:03.780 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:43:03.780 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:43:03.780 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@515 -- # '[' -n 95233 ']' 00:43:03.780 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # killprocess 95233 00:43:03.780 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 95233 ']' 00:43:03.780 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 95233 00:43:03.780 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:43:03.780 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:03.780 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95233 00:43:04.038 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:04.038 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:04.038 killing process with pid 95233 00:43:04.038 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95233' 00:43:04.038 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 95233 00:43:04.038 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 95233 00:43:04.038 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:43:04.038 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:43:04.038 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:43:04.038 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:43:04.038 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:43:04.038 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@789 -- # iptables-save 00:43:04.038 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:43:04.038 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:04.038 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:43:04.038 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:43:04.038 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:43:04.297 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:43:04.297 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:43:04.297 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:43:04.297 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:43:04.297 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:43:04.297 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:43:04.297 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:43:04.297 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:43:04.297 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:43:04.297 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:04.297 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:04.297 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:43:04.297 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:04.297 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:04.297 20:41:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:04.297 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:43:04.297 00:43:04.297 real 1m0.986s 00:43:04.297 user 2m53.080s 00:43:04.297 sys 0m13.158s 00:43:04.297 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:04.297 20:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:43:04.297 ************************************ 00:43:04.297 END TEST nvmf_host_multipath 00:43:04.297 ************************************ 00:43:04.297 20:41:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:43:04.297 20:41:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:43:04.297 20:41:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:04.297 20:41:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:43:04.555 ************************************ 00:43:04.555 START TEST nvmf_timeout 00:43:04.555 ************************************ 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:43:04.555 * Looking for test storage... 00:43:04.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:43:04.555 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:04.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:04.556 --rc genhtml_branch_coverage=1 00:43:04.556 --rc genhtml_function_coverage=1 00:43:04.556 --rc genhtml_legend=1 00:43:04.556 --rc geninfo_all_blocks=1 00:43:04.556 --rc geninfo_unexecuted_blocks=1 00:43:04.556 00:43:04.556 ' 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:04.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:04.556 --rc genhtml_branch_coverage=1 00:43:04.556 --rc genhtml_function_coverage=1 00:43:04.556 --rc genhtml_legend=1 00:43:04.556 --rc geninfo_all_blocks=1 00:43:04.556 --rc geninfo_unexecuted_blocks=1 00:43:04.556 00:43:04.556 ' 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:04.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:04.556 --rc genhtml_branch_coverage=1 00:43:04.556 --rc genhtml_function_coverage=1 00:43:04.556 --rc genhtml_legend=1 00:43:04.556 --rc geninfo_all_blocks=1 00:43:04.556 --rc geninfo_unexecuted_blocks=1 00:43:04.556 00:43:04.556 ' 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:04.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:04.556 --rc genhtml_branch_coverage=1 00:43:04.556 --rc genhtml_function_coverage=1 00:43:04.556 --rc genhtml_legend=1 00:43:04.556 --rc geninfo_all_blocks=1 00:43:04.556 --rc geninfo_unexecuted_blocks=1 00:43:04.556 00:43:04.556 ' 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:04.556 
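(The cmp_versions trace above is checking whether the installed lcov predates 2.x. The idiom, reimplemented here as a standalone sketch rather than the script's own code: split both version strings on '.', '-', or ':' and compare element-wise, treating missing fields as 0.)

    # lt A B: succeed (return 0) when version A sorts strictly before version B.
    lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2"    # prints: lcov 1.15 predates 2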
20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:04.556 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # prepare_net_devs 00:43:04.556 20:41:59 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@436 -- # local -g is_hw=no 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # remove_spdk_ns 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@458 -- # nvmf_veth_init 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:43:04.556 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:43:04.557 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:43:04.557 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:43:04.557 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:43:04.557 Cannot find device "nvmf_init_br" 00:43:04.557 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:43:04.557 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:43:04.557 Cannot find device "nvmf_init_br2" 00:43:04.557 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:43:04.557 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:43:04.815 Cannot find device "nvmf_tgt_br" 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:43:04.815 Cannot find device "nvmf_tgt_br2" 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:43:04.815 Cannot find device "nvmf_init_br" 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:43:04.815 Cannot find device "nvmf_init_br2" 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:43:04.815 Cannot find device "nvmf_tgt_br" 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:43:04.815 Cannot find device "nvmf_tgt_br2" 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:43:04.815 Cannot find device "nvmf_br" 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:43:04.815 Cannot find device "nvmf_init_if" 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:43:04.815 Cannot find device "nvmf_init_if2" 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:04.815 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:04.815 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
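(The nvmf_veth_init sequence running here builds the test topology: initiator veth ends stay in the root namespace, target ends move into nvmf_tgt_ns_spdk, and the peer ends are enslaved to one bridge. Condensed to a single initiator/target pair, with the same names and addresses as this log, a hand-runnable sketch:)

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge joins the peer ends
    ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3                                           # initiator-side sanity check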
00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:43:04.815 20:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:43:04.815 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:43:04.815 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:43:04.815 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:43:04.815 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:43:04.815 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:43:04.815 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:43:04.815 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:43:04.815 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:43:04.815 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:43:04.815 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:43:04.815 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:43:04.815 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:43:05.074 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:43:05.074 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:43:05.074 00:43:05.074 --- 10.0.0.3 ping statistics --- 00:43:05.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:05.074 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:43:05.074 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:43:05.074 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:43:05.074 00:43:05.074 --- 10.0.0.4 ping statistics --- 00:43:05.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:05.074 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:43:05.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:05.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:43:05.074 00:43:05.074 --- 10.0.0.1 ping statistics --- 00:43:05.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:05.074 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:43:05.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:05.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:43:05.074 00:43:05.074 --- 10.0.0.2 ping statistics --- 00:43:05.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:05.074 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # return 0 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:05.074 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:43:05.075 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # nvmfpid=96641 00:43:05.075 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:43:05.075 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # waitforlisten 96641 00:43:05.075 20:42:00 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96641 ']' 00:43:05.075 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:05.075 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:05.075 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:05.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:05.075 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:05.075 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:43:05.075 [2024-10-01 20:42:00.248729] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:43:05.075 [2024-10-01 20:42:00.248817] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:05.333 [2024-10-01 20:42:00.388595] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:05.333 [2024-10-01 20:42:00.441248] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:05.333 [2024-10-01 20:42:00.441312] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:05.333 [2024-10-01 20:42:00.441321] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:05.333 [2024-10-01 20:42:00.441328] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:05.333 [2024-10-01 20:42:00.441334] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
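The nvmfappstart sequence above reduces to running the target inside that namespace and blocking until its RPC socket answers; a minimal sketch, using the NVMF_APP prefixing visible in the log (waitforlisten is the autotest helper doing the polling):

  NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
  "${NVMF_APP[@]}" -i 0 -e 0xFFFF -m 0x3 &   # -m 0x3: reactors on cores 0 and 1, per the notices that follow
  nvmfpid=$!
  waitforlisten "$nvmfpid"                   # polls /var/tmp/spdk.sock until the target responds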
00:43:05.333 [2024-10-01 20:42:00.442471] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:05.333 [2024-10-01 20:42:00.442487] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:05.333 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:05.333 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:43:05.333 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:43:05.333 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:05.333 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:43:05.592 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:05.592 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:05.592 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:05.850 [2024-10-01 20:42:00.901782] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:05.850 20:42:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:43:06.109 Malloc0 00:43:06.109 20:42:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:06.368 20:42:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:06.627 20:42:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:43:06.886 [2024-10-01 20:42:01.980878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:43:06.886 20:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96717 00:43:06.886 20:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:43:06.886 20:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 96717 /var/tmp/bdevperf.sock 00:43:06.886 20:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96717 ']' 00:43:06.886 20:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:43:06.886 20:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:06.886 20:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:43:06.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
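Collected in order, the five target-side RPCs above are the entire provisioning for this test; commands verbatim from the log, comments added:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                                     # TCP transport; -u sets the I/O unit size
  $rpc bdev_malloc_create 64 512 -b Malloc0                                        # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host NQN
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as namespace 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420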
00:43:06.886 20:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:06.886 20:42:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:43:06.886 [2024-10-01 20:42:02.056834] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:43:06.886 [2024-10-01 20:42:02.056926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96717 ] 00:43:07.145 [2024-10-01 20:42:02.192800] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:07.145 [2024-10-01 20:42:02.251756] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:43:08.085 20:42:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:08.085 20:42:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:43:08.085 20:42:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:43:08.085 20:42:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:43:08.345 NVMe0n1 00:43:08.604 20:42:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96766 00:43:08.604 20:42:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:43:08.604 20:42:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:43:08.604 Running I/O for 10 seconds... 
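On the host side, condensed from the log above: bdevperf starts idle (-z) on its own RPC socket, bdev-layer retries are made unbounded, and the controller is attached with the two reconnect knobs this test exercises (--reconnect-delay-sec is the wait between reconnect attempts, --ctrlr-loss-timeout-sec is how long reconnects are attempted before the controller is given up). A sketch, with paths and arguments as logged; the gloss on -r is my reading of bdev_nvme_set_options' bdev retry count, where -1 means retry failed I/O indefinitely:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  bdevperf_pid=$!
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2   # retry every 2 s, give up after 5 s
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

With I/O running, the timeout.sh@55 step below removes the 10.0.0.3:4420 listener; the target then completes every queued command on qid:1 as ABORTED - SQ DELETION, which is the flood of nvme_qpair.c messages that follows.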
00:43:09.540 20:42:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:43:09.801 10227.00 IOPS, 39.95 MiB/s [2024-10-01 20:42:04.880634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c9c70 is same with the state(6) to be set
[... the same recv-state error for tqpair=0x9c9c70 repeats about 70 more times between 20:42:04.880692 and 20:42:04.881298; identical lines omitted ...]
00:43:09.802 [2024-10-01 20:42:04.882181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.802 [2024-10-01 20:42:04.882232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... with the listener removed, every remaining queued READ and WRITE on qid:1 is printed and completed the same way, ABORTED - SQ DELETION (00/08); more than a hundred near-identical command/completion pairs omitted; the log resumes below with the WRITE at lba:95024 ...]
00:43:09.805 [2024-10-01 20:42:04.884823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-10-01
20:42:04.884832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:09.805 [2024-10-01 20:42:04.884843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:09.805 [2024-10-01 20:42:04.884852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:09.805 [2024-10-01 20:42:04.884862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:09.805 [2024-10-01 20:42:04.884871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:09.805 [2024-10-01 20:42:04.884882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:09.805 [2024-10-01 20:42:04.884891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:09.805 [2024-10-01 20:42:04.884902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:09.805 [2024-10-01 20:42:04.884911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:09.805 [2024-10-01 20:42:04.884921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:09.805 [2024-10-01 20:42:04.884932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:09.805 [2024-10-01 20:42:04.884943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:09.805 [2024-10-01 20:42:04.884952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:09.805 [2024-10-01 20:42:04.884963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:09.805 [2024-10-01 20:42:04.884972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:09.805 [2024-10-01 20:42:04.884983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:09.806 [2024-10-01 20:42:04.884991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:09.806 [2024-10-01 20:42:04.885020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:09.806 [2024-10-01 20:42:04.885030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:09.806 [2024-10-01 20:42:04.885039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95096 len:8 PRP1 0x0 PRP2 0x0 00:43:09.806 [2024-10-01 20:42:04.885050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:09.806 [2024-10-01 
20:42:04.885090] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc53750 was disconnected and freed. reset controller. 00:43:09.806 [2024-10-01 20:42:04.885380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:09.806 [2024-10-01 20:42:04.885453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe1870 (9): Bad file descriptor 00:43:09.806 [2024-10-01 20:42:04.885601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:09.806 [2024-10-01 20:42:04.885622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe1870 with addr=10.0.0.3, port=4420 00:43:09.806 [2024-10-01 20:42:04.885633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1870 is same with the state(6) to be set 00:43:09.806 [2024-10-01 20:42:04.885651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe1870 (9): Bad file descriptor 00:43:09.806 [2024-10-01 20:42:04.885666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:09.806 [2024-10-01 20:42:04.885676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:09.806 [2024-10-01 20:42:04.885686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:09.806 [2024-10-01 20:42:04.885704] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:09.806 [2024-10-01 20:42:04.885715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:09.806 20:42:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:43:11.671 5880.00 IOPS, 22.97 MiB/s 3920.00 IOPS, 15.31 MiB/s [2024-10-01 20:42:06.885810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:11.671 [2024-10-01 20:42:06.885867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe1870 with addr=10.0.0.3, port=4420 00:43:11.671 [2024-10-01 20:42:06.885880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1870 is same with the state(6) to be set 00:43:11.671 [2024-10-01 20:42:06.885901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe1870 (9): Bad file descriptor 00:43:11.671 [2024-10-01 20:42:06.885918] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:11.671 [2024-10-01 20:42:06.885926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:11.671 [2024-10-01 20:42:06.885935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:11.671 [2024-10-01 20:42:06.885958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
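The errno in the connect() failures above is 111 (ECONNREFUSED): the target's listener is gone, so every reconnect to 10.0.0.3:4420 is refused outright. bdev_nvme keeps cycling through disconnect, reconnect and reset on its configured reconnect delay until the controller-loss timeout expires, at which point the controller is marked failed for good ("already in failed state" below) and resets stop being retried. Those knobs are set when the controller is attached; as a sketch, using the same values the second bdevperf instance in this log is given (the attach for this first instance happened before this excerpt, so its exact values are not visible here):

	rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
	# --reconnect-delay-sec      seconds between reconnect attempts
	# --fast-io-fail-timeout-sec start failing queued I/O this long after a drop
	# --ctrlr-loss-timeout-sec   give up on the controller entirely after this
	"$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
		-b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
		-n nqn.2016-06.io.spdk:cnode1 \
		--ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 \
		--reconnect-delay-sec 1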
00:43:11.671 [2024-10-01 20:42:06.885968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:11.671 20:42:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:43:11.671 20:42:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:43:11.671 20:42:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:43:12.237 20:42:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:43:12.237 20:42:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:43:12.237 20:42:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:43:12.237 20:42:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:43:12.237 20:42:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:43:12.237 20:42:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:43:13.871 2940.00 IOPS, 11.48 MiB/s 2352.00 IOPS, 9.19 MiB/s [2024-10-01 20:42:08.886258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:13.871 [2024-10-01 20:42:08.886306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe1870 with addr=10.0.0.3, port=4420 00:43:13.871 [2024-10-01 20:42:08.886321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1870 is same with the state(6) to be set 00:43:13.871 [2024-10-01 20:42:08.886343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe1870 (9): Bad file descriptor 00:43:13.871 [2024-10-01 20:42:08.886360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:13.871 [2024-10-01 20:42:08.886369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:13.871 [2024-10-01 20:42:08.886379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:13.871 [2024-10-01 20:42:08.886402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:13.871 [2024-10-01 20:42:08.886412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:15.794 1960.00 IOPS, 7.66 MiB/s 1680.00 IOPS, 6.56 MiB/s [2024-10-01 20:42:10.886550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:15.794 [2024-10-01 20:42:10.886593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:15.794 [2024-10-01 20:42:10.886610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:15.794 [2024-10-01 20:42:10.886623] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:43:15.794 [2024-10-01 20:42:10.886656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
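Throughout the outage the harness decides pass/fail by polling bdevperf over its RPC socket: two seconds in (host/timeout.sh@57/@58 above) the controller and bdev must still be listed, and after sleeping past the controller-loss timeout (@61, then @62/@63 below) both lookups must come back empty. The traced get_controller and get_bdev helpers reduce to one RPC call plus jq each; a sketch reconstructed from the xtrace:

	rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
	sock=/var/tmp/bdevperf.sock

	# Name of the attached NVMe controller: "NVMe0" while connected,
	# empty once bdev_nvme has deleted it after ctrlr-loss-timeout.
	get_controller() {
		"$rpc_py" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'
	}

	# Name of the resulting bdev: "NVMe0n1", likewise empty after deletion.
	get_bdev() {
		"$rpc_py" -s "$sock" bdev_get_bdevs | jq -r '.[].name'
	}

The same jq pattern works on the per-job results JSON that bdevperf prints below, e.g. jq -r '.results[].iops' to pull out the measured IOPS.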
00:43:16.731 1470.00 IOPS, 5.74 MiB/s
00:43:16.731 Latency(us)
00:43:16.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:16.731 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:43:16.731 Verification LBA range: start 0x0 length 0x4000
00:43:16.731 NVMe0n1 : 8.15 1442.52 5.63 15.70 0.00 87657.99 1966.08 7015926.69
00:43:16.731 ===================================================================================================================
00:43:16.731 Total : 1442.52 5.63 15.70 0.00 87657.99 1966.08 7015926.69
00:43:16.731 {
00:43:16.731   "results": [
00:43:16.731     {
00:43:16.731       "job": "NVMe0n1",
00:43:16.731       "core_mask": "0x4",
00:43:16.731       "workload": "verify",
00:43:16.731       "status": "finished",
00:43:16.731       "verify_range": {
00:43:16.731         "start": 0,
00:43:16.731         "length": 16384
00:43:16.731       },
00:43:16.731       "queue_depth": 128,
00:43:16.731       "io_size": 4096,
00:43:16.731       "runtime": 8.152419,
00:43:16.731       "iops": 1442.5166321799702,
00:43:16.731       "mibps": 5.634830594453009,
00:43:16.731       "io_failed": 128,
00:43:16.731       "io_timeout": 0,
00:43:16.731       "avg_latency_us": 87657.99145479017,
00:43:16.731       "min_latency_us": 1966.08,
00:43:16.731       "max_latency_us": 7015926.69090909
00:43:16.731     }
00:43:16.731   ],
00:43:16.731   "core_count": 1
00:43:16.731 }
00:43:17.300 20:42:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:43:17.300 20:42:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:43:17.300 20:42:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:43:17.559 20:42:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:43:17.559 20:42:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:43:17.559 20:42:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:43:17.559 20:42:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:43:17.818 20:42:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:43:17.818 20:42:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 96766
00:43:17.818 20:42:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96717
00:43:17.818 20:42:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96717 ']'
00:43:17.818 20:42:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96717
00:43:17.818 20:42:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname
00:43:17.818 20:42:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:43:17.818 20:42:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96717
00:43:17.818 killing process with pid 96717
Received shutdown signal, test time was about 9.254738 seconds
00:43:17.818
00:43:17.818 Latency(us)
00:43:17.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:17.818 ===================================================================================================================
00:43:17.818 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:43:17.818 20:42:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:43:17.818 20:42:12
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:43:17.818 20:42:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96717' 00:43:17.818 20:42:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96717 00:43:17.818 20:42:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96717 00:43:18.078 20:42:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:43:18.337 [2024-10-01 20:42:13.372367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:43:18.337 20:42:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:43:18.337 20:42:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96919 00:43:18.337 20:42:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96919 /var/tmp/bdevperf.sock 00:43:18.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:43:18.337 20:42:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96919 ']' 00:43:18.337 20:42:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:43:18.337 20:42:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:18.337 20:42:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:43:18.337 20:42:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:18.337 20:42:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:43:18.337 [2024-10-01 20:42:13.430576] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
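killprocess (invoked at host/timeout.sh@67 above) is the stock autotest teardown helper; stripped to the calls visible in the xtrace, it is roughly the following sketch (the branch hinted at by the '[' reactor_2 = sudo ']' test is elided here, since only its false case is traced):

	# Rough sketch of the traced helper: check the pid is alive and what
	# it is, announce, kill it, and wait so the exit status is reaped.
	killprocess() {
		local pid=$1
		[ -z "$pid" ] && return 1
		kill -0 "$pid" || return 0                       # already gone
		local process_name
		process_name=$(ps --no-headers -o comm= "$pid")  # "reactor_2" above
		# (the real helper special-cases process_name == "sudo")
		echo "killing process with pid $pid"
		kill "$pid"
		wait "$pid"
	}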
00:43:18.337 [2024-10-01 20:42:13.431157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96919 ] 00:43:18.337 [2024-10-01 20:42:13.564450] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:18.597 [2024-10-01 20:42:13.618165] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:43:19.165 20:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:19.424 20:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:43:19.424 20:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:43:19.683 20:42:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:43:19.942 NVMe0n1 00:43:19.942 20:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:43:19.942 20:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96968 00:43:19.942 20:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:43:20.200 Running I/O for 10 seconds... 00:43:21.137 20:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:43:21.137 10319.00 IOPS, 40.31 MiB/s [2024-10-01 20:42:16.325559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb59840 is same with the state(6) to be set 00:43:21.137 [2024-10-01 20:42:16.325619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb59840 is same with the state(6) to be set 00:43:21.137 [2024-10-01 20:42:16.325644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb59840 is same with the state(6) to be set 00:43:21.137 [2024-10-01 20:42:16.325652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb59840 is same with the state(6) to be set 00:43:21.137 [2024-10-01 20:42:16.325659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb59840 is same with the state(6) to be set 00:43:21.137 [2024-10-01 20:42:16.325667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb59840 is same with the state(6) to be set 00:43:21.137 [2024-10-01 20:42:16.325674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb59840 is same with the state(6) to be set 00:43:21.137 [2024-10-01 20:42:16.325681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb59840 is same with the state(6) to be set 00:43:21.137 [2024-10-01 20:42:16.325688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb59840 is same with the state(6) to be set 00:43:21.137 [2024-10-01 20:42:16.325695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb59840 is same with the state(6) to be set 00:43:21.137 
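At this point the second scenario is fully set up: a fresh bdevperf (pid 96919) attached NVMe0 with a 5 s controller-loss timeout, a 10 s verify workload was kicked off through bdevperf.py, and one second in the test pulled the target listener out from under it (host/timeout.sh@87). The repeated recv-state errors around here are the target-side TCP state machine winding the dying qpair down, and the abort storm that follows is the initiator failing its queued I/O. In outline, reconstructed from the xtrace above:

	rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
	bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
	sock=/var/tmp/bdevperf.sock
	nqn=nqn.2016-06.io.spdk:cnode1

	# Expose the subsystem, then yank the listener one second into the run.
	"$rpc_py" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
	# (bdev_nvme_attach_controller with the reconnect flags, as traced above)
	"$bdevperf_py" -s "$sock" perform_tests &   # 10 s of verify I/O
	sleep 1
	"$rpc_py" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420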
[... tcp.c:1773:nvmf_tcp_qpair_set_recv_state: "The recv state of tqpair=0xb59840 is same with the state(6) to be set" repeats a few dozen more times (20:42:16.325701 through .325987) while the target winds the qpair down ...]
00:43:21.138 [2024-10-01 20:42:16.327347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:21.138 [2024-10-01 20:42:16.327405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.138
[... READ / "ABORTED - SQ DELETION" and WRITE / "ABORTED - SQ DELETION" pairs repeat for the rest of the queued I/O, interleaved (reads lba:92544 through lba:92720, writes lba:92800 through lba:93440); the records differ only in cid and lba ...]
00:43:21.141 [2024-10-01 20:42:16.329673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:21.141 [2024-10-01 20:42:16.329681] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.141 [2024-10-01 20:42:16.329692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:93456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:21.141 [2024-10-01 20:42:16.329700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.141 [2024-10-01 20:42:16.329710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:21.141 [2024-10-01 20:42:16.329719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.141 [2024-10-01 20:42:16.329733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:21.141 [2024-10-01 20:42:16.329741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.141 [2024-10-01 20:42:16.329752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:93480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:21.141 [2024-10-01 20:42:16.329760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.142 [2024-10-01 20:42:16.329771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:21.142 [2024-10-01 20:42:16.329779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.142 [2024-10-01 20:42:16.329808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:21.142 [2024-10-01 20:42:16.329819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93496 len:8 PRP1 0x0 PRP2 0x0 00:43:21.142 [2024-10-01 20:42:16.329828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.142 [2024-10-01 20:42:16.329843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:21.142 [2024-10-01 20:42:16.329850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:21.142 [2024-10-01 20:42:16.329858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93504 len:8 PRP1 0x0 PRP2 0x0 00:43:21.142 [2024-10-01 20:42:16.329866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.142 [2024-10-01 20:42:16.329875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:21.142 [2024-10-01 20:42:16.329882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:21.142 [2024-10-01 20:42:16.329889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93512 len:8 PRP1 0x0 PRP2 0x0 00:43:21.142 [2024-10-01 20:42:16.329897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.142 [2024-10-01 20:42:16.329906] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:21.142 [2024-10-01 20:42:16.329913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:21.142 [2024-10-01 20:42:16.329920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93520 len:8 PRP1 0x0 PRP2 0x0 00:43:21.142 [2024-10-01 20:42:16.329928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.142 [2024-10-01 20:42:16.329937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:21.142 [2024-10-01 20:42:16.329943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:21.142 [2024-10-01 20:42:16.329950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93528 len:8 PRP1 0x0 PRP2 0x0 00:43:21.142 [2024-10-01 20:42:16.329959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.142 [2024-10-01 20:42:16.329967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:21.142 [2024-10-01 20:42:16.329974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:21.142 [2024-10-01 20:42:16.329981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93536 len:8 PRP1 0x0 PRP2 0x0 00:43:21.142 [2024-10-01 20:42:16.329989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.142 [2024-10-01 20:42:16.329998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:21.142 [2024-10-01 20:42:16.330004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:21.142 [2024-10-01 20:42:16.330012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93544 len:8 PRP1 0x0 PRP2 0x0 00:43:21.142 [2024-10-01 20:42:16.330021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.142 [2024-10-01 20:42:16.330030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:21.142 [2024-10-01 20:42:16.330037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:21.142 [2024-10-01 20:42:16.330044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93552 len:8 PRP1 0x0 PRP2 0x0 00:43:21.142 [2024-10-01 20:42:16.330052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.142 [2024-10-01 20:42:16.330061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:21.142 [2024-10-01 20:42:16.330067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:21.142 [2024-10-01 20:42:16.330075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92728 len:8 PRP1 0x0 PRP2 0x0 00:43:21.142 [2024-10-01 20:42:16.330083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.142 [2024-10-01 20:42:16.330093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:43:21.142 [2024-10-01 20:42:16.330100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:21.142 [2024-10-01 20:42:16.330107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92736 len:8 PRP1 0x0 PRP2 0x0 00:43:21.142 [2024-10-01 20:42:16.330115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.142 [2024-10-01 20:42:16.330166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:21.142 [2024-10-01 20:42:16.330174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:21.142 [2024-10-01 20:42:16.330182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92744 len:8 PRP1 0x0 PRP2 0x0 00:43:21.142 [2024-10-01 20:42:16.330191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.142 [2024-10-01 20:42:16.330200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:21.142 [2024-10-01 20:42:16.330207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:21.142 [2024-10-01 20:42:16.330215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92752 len:8 PRP1 0x0 PRP2 0x0 00:43:21.142 [2024-10-01 20:42:16.330224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.142 [2024-10-01 20:42:16.330232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:21.142 [2024-10-01 20:42:16.330240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:21.142 [2024-10-01 20:42:16.330247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92760 len:8 PRP1 0x0 PRP2 0x0 00:43:21.142 [2024-10-01 20:42:16.330256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.142 [2024-10-01 20:42:16.330265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:21.142 [2024-10-01 20:42:16.330272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:21.142 [2024-10-01 20:42:16.330279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92768 len:8 PRP1 0x0 PRP2 0x0 00:43:21.142 [2024-10-01 20:42:16.330288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.142 [2024-10-01 20:42:16.330297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:21.142 [2024-10-01 20:42:16.330304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:21.142 [2024-10-01 20:42:16.330312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92776 len:8 PRP1 0x0 PRP2 0x0 00:43:21.142 [2024-10-01 20:42:16.340312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.142 [2024-10-01 20:42:16.340362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:21.142 [2024-10-01 20:42:16.340371] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:21.142 [2024-10-01 20:42:16.340380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92784 len:8 PRP1 0x0 PRP2 0x0 00:43:21.142 [2024-10-01 20:42:16.340389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.142 [2024-10-01 20:42:16.340397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:21.142 [2024-10-01 20:42:16.340404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:21.142 [2024-10-01 20:42:16.340411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92792 len:8 PRP1 0x0 PRP2 0x0 00:43:21.143 [2024-10-01 20:42:16.340419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.143 [2024-10-01 20:42:16.340460] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x64b750 was disconnected and freed. reset controller. 00:43:21.143 [2024-10-01 20:42:16.340586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:43:21.143 [2024-10-01 20:42:16.340603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.143 [2024-10-01 20:42:16.340615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:43:21.143 [2024-10-01 20:42:16.340623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.143 [2024-10-01 20:42:16.340632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:43:21.143 [2024-10-01 20:42:16.340641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.143 [2024-10-01 20:42:16.340650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:43:21.143 [2024-10-01 20:42:16.340658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:21.143 [2024-10-01 20:42:16.340667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9870 is same with the state(6) to be set 00:43:21.143 [2024-10-01 20:42:16.340900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:21.143 [2024-10-01 20:42:16.340931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d9870 (9): Bad file descriptor 00:43:21.143 [2024-10-01 20:42:16.341024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:21.143 [2024-10-01 20:42:16.341052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d9870 with addr=10.0.0.3, port=4420 00:43:21.143 [2024-10-01 20:42:16.341063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9870 is same with the state(6) to be set 00:43:21.143 [2024-10-01 20:42:16.341080] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d9870 (9): Bad file descriptor 00:43:21.143 [2024-10-01 20:42:16.341096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:21.143 [2024-10-01 20:42:16.341105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:21.143 [2024-10-01 20:42:16.341130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:21.143 [2024-10-01 20:42:16.341151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:21.143 [2024-10-01 20:42:16.341162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:21.143 20:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:43:22.338 5783.50 IOPS, 22.59 MiB/s [2024-10-01 20:42:17.341249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:22.338 [2024-10-01 20:42:17.341324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d9870 with addr=10.0.0.3, port=4420 00:43:22.338 [2024-10-01 20:42:17.341337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9870 is same with the state(6) to be set 00:43:22.338 [2024-10-01 20:42:17.341356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d9870 (9): Bad file descriptor 00:43:22.338 [2024-10-01 20:42:17.341371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:22.338 [2024-10-01 20:42:17.341380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:22.338 [2024-10-01 20:42:17.341389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:22.338 [2024-10-01 20:42:17.341410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:22.338 [2024-10-01 20:42:17.341420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:22.338 20:42:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:43:22.597 [2024-10-01 20:42:17.613858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:43:22.597 20:42:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 96968 00:43:23.164 3855.67 IOPS, 15.06 MiB/s [2024-10-01 20:42:18.354053] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
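The exchange above is the core of the timeout test: while the TCP listener is gone, every reconnect attempt from the host fails with errno = 111 (ECONNREFUSED), and the controller cycles through "resetting controller" / "Resetting controller failed." until host/timeout.sh@91 restores the listener with rpc.py, after which the next reset succeeds. rpc.py is a thin client that sends JSON-RPC 2.0 requests over the target's Unix-domain RPC socket, so the same step can be expressed as one raw request. The sketch below does that; the default socket path and the exact listen_address parameter shape are assumptions inferred from the -t/-a/-s flags, not taken from this log (the nvmf_subsystem_remove_listener call seen later takes the same arguments).

import json
import socket

def spdk_rpc(method, params, sock_path="/var/tmp/spdk.sock"):
    # Send a single JSON-RPC 2.0 request to a local SPDK application and
    # block until the reply parses as a complete JSON document.
    request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("RPC socket closed before a full reply arrived")
            buf += chunk
            try:
                return json.loads(buf)
            except json.JSONDecodeError:
                continue  # reply not complete yet, keep reading

# Equivalent of the host/timeout.sh@91 invocation above (assumed parameter shape):
print(spdk_rpc("nvmf_subsystem_add_listener", {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "listen_address": {"trtype": "tcp", "traddr": "10.0.0.3", "trsvcid": "4420"},
}))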
00:43:30.155 2891.75 IOPS, 11.30 MiB/s 4109.80 IOPS, 16.05 MiB/s 5215.83 IOPS, 20.37 MiB/s 5973.29 IOPS, 23.33 MiB/s 6547.25 IOPS, 25.58 MiB/s 6981.00 IOPS, 27.27 MiB/s 7349.40 IOPS, 28.71 MiB/s
00:43:30.155 Latency(us)
00:43:30.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:30.155 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:43:30.155 Verification LBA range: start 0x0 length 0x4000
00:43:30.155 NVMe0n1 : 10.01 7357.07 28.74 0.00 0.00 17374.79 1765.00 3035150.89
00:43:30.155 ===================================================================================================================
00:43:30.155 Total : 7357.07 28.74 0.00 0.00 17374.79 1765.00 3035150.89
00:43:30.155 {
00:43:30.155 "results": [
00:43:30.155 {
00:43:30.155 "job": "NVMe0n1",
00:43:30.155 "core_mask": "0x4",
00:43:30.155 "workload": "verify",
00:43:30.155 "status": "finished",
00:43:30.155 "verify_range": {
00:43:30.155 "start": 0,
00:43:30.155 "length": 16384
00:43:30.155 },
00:43:30.155 "queue_depth": 128,
00:43:30.155 "io_size": 4096,
00:43:30.155 "runtime": 10.011865,
00:43:30.155 "iops": 7357.070835453734,
00:43:30.155 "mibps": 28.738557950991147,
00:43:30.155 "io_failed": 0,
00:43:30.155 "io_timeout": 0,
00:43:30.155 "avg_latency_us": 17374.7891755262,
00:43:30.155 "min_latency_us": 1765.0036363636364,
00:43:30.155 "max_latency_us": 3035150.8945454545
00:43:30.155 }
00:43:30.155 ],
00:43:30.155 "core_count": 1
00:43:30.155 }
20:42:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=97090
20:42:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
20:42:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
Running I/O for 10 seconds...
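The human-readable "Total" row and the JSON document above describe the same run: 7357.07 IOPS of 4096-byte I/O is 28.74 MiB/s, and the ~3.04 s max latency is consistent with the window during which the listener was down. A minimal sketch of consuming these results programmatically follows; the literal is trimmed to the fields the sketch actually reads, with values copied verbatim from the log.

import json

raw = """{
  "results": [
    {
      "job": "NVMe0n1",
      "runtime": 10.011865,
      "io_size": 4096,
      "iops": 7357.070835453734,
      "mibps": 28.738557950991147,
      "io_failed": 0,
      "avg_latency_us": 17374.7891755262
    }
  ],
  "core_count": 1
}"""

summary = json.loads(raw)
for job in summary["results"]:
    # Cross-check the reported throughput: IOPS * I/O size in bytes -> MiB/s.
    derived_mibps = job["iops"] * job["io_size"] / (1024 * 1024)
    print(f"{job['job']}: {job['iops']:.2f} IOPS, "
          f"{derived_mibps:.2f} MiB/s derived vs {job['mibps']:.2f} reported, "
          f"avg latency {job['avg_latency_us'] / 1000:.2f} ms, "
          f"{job['io_failed']} failed I/O in {job['runtime']:.2f} s")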
00:43:31.091 20:42:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:43:31.353 10332.00 IOPS, 40.36 MiB/s [2024-10-01 20:42:26.503432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa21480 is same with the state(6) to be set
00:43:31.353 [... the same tcp.c:1773 recv-state error for tqpair=0xa21480 repeated for every event between 20:42:26.503530 and 20:42:26.504563 ...]
00:43:31.355 [2024-10-01 20:42:26.504973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:31.355 [2024-10-01 20:42:26.505015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:43:31.355 [... identical READ / ABORTED - SQ DELETION (00/08) notice pairs for lba:93960 through lba:94368 ...]
00:43:31.357 [2024-10-01 20:42:26.506157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:31.357 [2024-10-01 20:42:26.506166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:43:31.357 [2024-10-01 20:42:26.506177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
WRITE sqid:1 cid:79 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.357 [2024-10-01 20:42:26.506187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.357 [2024-10-01 20:42:26.506207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.357 [2024-10-01 20:42:26.506226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.357 [2024-10-01 20:42:26.506246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.357 [2024-10-01 20:42:26.506267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.357 [2024-10-01 20:42:26.506288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.357 [2024-10-01 20:42:26.506308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:31.357 [2024-10-01 20:42:26.506328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:31.357 [2024-10-01 20:42:26.506348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:31.357 [2024-10-01 20:42:26.506368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94408 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:31.357 [2024-10-01 20:42:26.506389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:31.357 [2024-10-01 20:42:26.506409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:31.357 [2024-10-01 20:42:26.506431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:31.357 [2024-10-01 20:42:26.506451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:31.357 [2024-10-01 20:42:26.506471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:31.357 [2024-10-01 20:42:26.506491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.357 [2024-10-01 20:42:26.506525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.357 [2024-10-01 20:42:26.506545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.357 [2024-10-01 20:42:26.506564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.357 [2024-10-01 20:42:26.506583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.357 
[2024-10-01 20:42:26.506602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.357 [2024-10-01 20:42:26.506621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.357 [2024-10-01 20:42:26.506640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.357 [2024-10-01 20:42:26.506660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.357 [2024-10-01 20:42:26.506678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.357 [2024-10-01 20:42:26.506698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.357 [2024-10-01 20:42:26.506709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.357 [2024-10-01 20:42:26.506718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.506728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.506736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.506747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.506757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.506768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.506777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.506787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.506796] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.506807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.506815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.506826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.506835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.506845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.506854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.506865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.506873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.506884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.506892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.506903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.506912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.506922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.506931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.506941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.506949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.506960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.506969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.506979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.506988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.506998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.507007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.507018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.507027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.507037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.507046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.507056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.507067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.507077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.507086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.507096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.507105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.507132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.507150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.507162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.507171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.507182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.507191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.507202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.507211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.507222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.507231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.507242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.507251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.507261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.507270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.507281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.507290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.507301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.507309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.507320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.507329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.507340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.507348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.507360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.507369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.507380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.507389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.358 [2024-10-01 20:42:26.507400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.358 [2024-10-01 20:42:26.507411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.359 [2024-10-01 20:42:26.507423] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.359 [2024-10-01 20:42:26.507432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.359 [2024-10-01 20:42:26.507443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.359 [2024-10-01 20:42:26.507451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.359 [2024-10-01 20:42:26.507462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.359 [2024-10-01 20:42:26.507471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.359 [2024-10-01 20:42:26.507482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.359 [2024-10-01 20:42:26.507491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.359 [2024-10-01 20:42:26.507517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.359 [2024-10-01 20:42:26.507525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.359 [2024-10-01 20:42:26.507535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.359 [2024-10-01 20:42:26.507544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.359 [2024-10-01 20:42:26.507555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.359 [2024-10-01 20:42:26.507564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.359 [2024-10-01 20:42:26.507574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.359 [2024-10-01 20:42:26.507583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.359 [2024-10-01 20:42:26.507594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.359 [2024-10-01 20:42:26.507602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.359 [2024-10-01 20:42:26.507613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.359 [2024-10-01 20:42:26.507621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.359 [2024-10-01 20:42:26.507632] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.359 [2024-10-01 20:42:26.507640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.359 [2024-10-01 20:42:26.507651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:31.359 [2024-10-01 20:42:26.507659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.359 [2024-10-01 20:42:26.507689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:31.359 [2024-10-01 20:42:26.507699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:31.359 [2024-10-01 20:42:26.507710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94968 len:8 PRP1 0x0 PRP2 0x0 00:43:31.359 [2024-10-01 20:42:26.507719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:31.359 [2024-10-01 20:42:26.507759] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x649e90 was disconnected and freed. reset controller. 00:43:31.359 [2024-10-01 20:42:26.508036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:31.359 [2024-10-01 20:42:26.508129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d9870 (9): Bad file descriptor 00:43:31.359 [2024-10-01 20:42:26.508231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:31.359 [2024-10-01 20:42:26.508262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d9870 with addr=10.0.0.3, port=4420 00:43:31.359 [2024-10-01 20:42:26.508273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9870 is same with the state(6) to be set 00:43:31.359 [2024-10-01 20:42:26.508291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d9870 (9): Bad file descriptor 00:43:31.359 [2024-10-01 20:42:26.508306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:31.359 [2024-10-01 20:42:26.508315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:31.359 [2024-10-01 20:42:26.508326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:31.359 [2024-10-01 20:42:26.508346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:31.359 [2024-10-01 20:42:26.508358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:43:31.359 20:42:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3
00:43:32.296 5872.00 IOPS, 22.94 MiB/s [2024-10-01 20:42:27.508453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:43:32.296 [2024-10-01 20:42:27.508681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d9870 with addr=10.0.0.3, port=4420
00:43:32.296 [2024-10-01 20:42:27.508704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9870 is same with the state(6) to be set
00:43:32.296 [2024-10-01 20:42:27.508730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d9870 (9): Bad file descriptor
00:43:32.296 [2024-10-01 20:42:27.508747] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:43:32.296 [2024-10-01 20:42:27.508757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:43:32.296 [2024-10-01 20:42:27.508768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:43:32.296 [2024-10-01 20:42:27.508792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:43:32.296 [2024-10-01 20:42:27.508803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:43:33.490 3914.67 IOPS, 15.29 MiB/s [2024-10-01 20:42:28.508890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:43:33.490 [2024-10-01 20:42:28.508961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d9870 with addr=10.0.0.3, port=4420
00:43:33.490 [2024-10-01 20:42:28.508974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9870 is same with the state(6) to be set
00:43:33.490 [2024-10-01 20:42:28.508992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d9870 (9): Bad file descriptor
00:43:33.490 [2024-10-01 20:42:28.509007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:43:33.490 [2024-10-01 20:42:28.509015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:43:33.490 [2024-10-01 20:42:28.509024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:43:33.490 [2024-10-01 20:42:28.509043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:43:33.490 [2024-10-01 20:42:28.509053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:43:34.424 2936.00 IOPS, 11.47 MiB/s [2024-10-01 20:42:29.511933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:43:34.424 [2024-10-01 20:42:29.512008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d9870 with addr=10.0.0.3, port=4420
00:43:34.424 [2024-10-01 20:42:29.512022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9870 is same with the state(6) to be set
00:43:34.424 [2024-10-01 20:42:29.512325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d9870 (9): Bad file descriptor
00:43:34.424 [2024-10-01 20:42:29.512563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:43:34.424 [2024-10-01 20:42:29.512584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:43:34.424 [2024-10-01 20:42:29.512594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:43:34.424 [2024-10-01 20:42:29.516219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:43:34.424 [2024-10-01 20:42:29.516278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:43:34.424 20:42:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:43:34.683 [2024-10-01 20:42:29.783227] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:43:34.683 20:42:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 97090
00:43:35.508 2348.80 IOPS, 9.18 MiB/s [2024-10-01 20:42:30.552463] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
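The stretch above is the crux of this phase of the timeout test: with the target's listener removed, every host-side reconnect attempt fails with errno 111 (connection refused) at roughly one-second intervals and throughput collapses, and as soon as nvmf_subsystem_add_listener restores the listener the next reset attempt succeeds. The same fault injection can be driven by hand with the two RPCs visible in the log (a sketch; the rpc.py path, NQN, address, and port are taken from the log, while running them standalone is my assumption):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Drop the listener: host reconnect attempts now fail with errno 111.
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.3 -s 4420
    sleep 3   # the host keeps retrying while the listener is gone
    # Restore it; the host's next reconnect/reset attempt should succeed.
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4420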
00:43:40.235 3391.17 IOPS, 13.25 MiB/s 4426.57 IOPS, 17.29 MiB/s 5211.50 IOPS, 20.36 MiB/s 5804.89 IOPS, 22.68 MiB/s 6236.30 IOPS, 24.36 MiB/s
00:43:40.235 Latency(us)
00:43:40.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:40.235 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:43:40.235 Verification LBA range: start 0x0 length 0x4000
00:43:40.235 NVMe0n1 : 10.01 6242.73 24.39 4122.69 0.00 12323.98 584.61 3019898.88
00:43:40.235 ===================================================================================================================
00:43:40.235 Total : 6242.73 24.39 4122.69 0.00 12323.98 0.00 3019898.88
00:43:40.235 {
00:43:40.235 "results": [
00:43:40.235 {
00:43:40.235 "job": "NVMe0n1",
00:43:40.235 "core_mask": "0x4",
00:43:40.235 "workload": "verify",
00:43:40.236 "status": "finished",
00:43:40.236 "verify_range": {
00:43:40.236 "start": 0,
00:43:40.236 "length": 16384
00:43:40.236 },
00:43:40.236 "queue_depth": 128,
00:43:40.236 "io_size": 4096,
00:43:40.236 "runtime": 10.008278,
00:43:40.236 "iops": 6242.732266230015,
00:43:40.236 "mibps": 24.385672914960995,
00:43:40.236 "io_failed": 41261,
00:43:40.236 "io_timeout": 0,
00:43:40.236 "avg_latency_us": 12323.977850640587,
00:43:40.236 "min_latency_us": 584.6109090909091,
00:43:40.236 "max_latency_us": 3019898.88
00:43:40.236 }
00:43:40.236 ],
00:43:40.236 "core_count": 1
00:43:40.236 }
00:43:40.236 20:42:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96919
00:43:40.236 20:42:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96919 ']'
00:43:40.236 20:42:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96919
00:43:40.236 20:42:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname
00:43:40.236 20:42:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:43:40.236 20:42:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96919
killing process with pid 96919
Received shutdown signal, test time was about 10.000000 seconds
00:43:40.236
00:43:40.236 Latency(us)
00:43:40.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:40.236 ===================================================================================================================
00:43:40.236 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:43:40.236 20:42:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:43:40.236 20:42:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:43:40.236 20:42:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96919'
00:43:40.236 20:42:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96919
00:43:40.236 20:42:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96919
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
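The JSON block is the authoritative record of the run; the human-readable table above it is derived from the same fields, which allows a quick consistency check (a sketch; the awk invocation is mine, the constants are copied from the JSON):

    awk 'BEGIN {
        iops = 6242.732266230015; runtime = 10.008278; io_failed = 41261
        printf "%.2f MiB/s\n", iops * 4096 / 1048576   # io_size 4096 -> matches the 24.39 MiB/s column
        printf "%.2f Fail/s\n", io_failed / runtime    # matches the 4122.69 Fail/s column
    }'

The 41261 failed I/Os are presumably the requests aborted with SQ DELETION status during the listener outage; they count against Fail/s rather than IOPS, which is why throughput only averages ~6.2k IOPS despite the ~20k IOPS steady state seen later.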
00:43:40.494 20:42:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=97211
00:43:40.494 20:42:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:43:40.494 20:42:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 97211 /var/tmp/bdevperf.sock
00:43:40.494 20:42:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 97211 ']'
00:43:40.494 20:42:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:43:40.494 20:42:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100
00:43:40.494 20:42:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:43:40.494 20:42:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable
00:43:40.494 20:42:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:43:40.753 [2024-10-01 20:42:35.683533] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization...
00:43:40.753 [2024-10-01 20:42:35.683638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97211 ]
00:43:40.753 [2024-10-01 20:42:35.819502] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:40.753 [2024-10-01 20:42:35.873186] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:43:41.688 20:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:43:41.688 20:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0
00:43:41.688 20:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97211 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:43:41.688 20:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=97239
00:43:41.688 20:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:43:41.946 20:42:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:43:42.204 NVMe0n1
00:43:42.204 20:42:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:43:42.204 20:42:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=97298
00:43:42.204 20:42:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
Running I/O for 10 seconds...
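This second bdevperf instance is started with -z (come up idle and wait for RPC configuration) on its own RPC socket, and the controller is attached with an explicit reconnect policy: as I read SPDK's bdev_nvme options, --ctrlr-loss-timeout-sec 5 bounds how long reconnects are retried before the controller is failed for good, and --reconnect-delay-sec 2 spaces the attempts. A minimal manual equivalent of this bring-up, using only commands that appear in the log (standalone use and the flag interpretation are my assumptions):

    sock=/var/tmp/bdevperf.sock
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock"

    # Start bdevperf idle (-z); it listens on $sock for configuration RPCs.
    # (The test waits for $sock to appear before issuing RPCs.)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r $sock -q 128 -o 4096 -w randread -t 10 -f &

    $rpc bdev_nvme_set_options -r -1 -e 9

    # Attach with a 5 s controller-loss budget and 2 s between reconnect
    # attempts; the RPC prints the created bdev name (NVMe0n1 in the log).
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # Kick off the actual I/O run, as the test does via bdevperf.py:
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests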
00:43:43.140 20:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:43:43.401 20277.00 IOPS, 79.21 MiB/s [2024-10-01 20:42:38.524426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa25030 is same with the state(6) to be set
00:43:43.401 [... the same "recv state of tqpair=0xa25030" *ERROR* repeated 23 more times between 20:42:38.524512 and 20:42:38.524670; duplicate lines omitted ...]
00:43:43.401 [2024-10-01 20:42:38.525262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:43.401 [2024-10-01 20:42:38.525307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:43:43.402 [... further queued READ commands on qid:1, all completed with the same ABORTED - SQ DELETION (00/08) status after the listener removal; repeated command/completion pairs omitted ...]
00:43:43.402 [2024-10-01 20:42:38.526368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:101216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:43.402 [2024-10-01 20:42:38.526378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:54848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:54048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:124912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:115344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:102568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:90872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:127008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:43:43.403 [2024-10-01 20:42:38.526824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:107824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:56232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:52968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.526988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.526997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.527022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.527048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 
20:42:38.527059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.527068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.527078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:108768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.527087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.527097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.527106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.403 [2024-10-01 20:42:38.527117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.403 [2024-10-01 20:42:38.527126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:70056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:52880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:55456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:46072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527516] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:50 nsid:1 lba:60224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:117904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:48880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:129760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:50256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:89192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.404 [2024-10-01 20:42:38.527904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.404 [2024-10-01 20:42:38.527914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.405 [2024-10-01 20:42:38.527925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.405 [2024-10-01 20:42:38.527934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.405 [2024-10-01 20:42:38.527945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.405 [2024-10-01 20:42:38.527954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.405 [2024-10-01 20:42:38.527966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2224 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:43:43.405 [2024-10-01 20:42:38.527975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.405 [2024-10-01 20:42:38.527986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.405 [2024-10-01 20:42:38.527996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.405 [2024-10-01 20:42:38.528008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.405 [2024-10-01 20:42:38.528017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.405 [2024-10-01 20:42:38.528029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:34656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.405 [2024-10-01 20:42:38.528038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.405 [2024-10-01 20:42:38.528068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:43.405 [2024-10-01 20:42:38.528080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38760 len:8 PRP1 0x0 PRP2 0x0 00:43:43.405 [2024-10-01 20:42:38.528089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.405 [2024-10-01 20:42:38.528103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:43.405 [2024-10-01 20:42:38.528111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:43.405 [2024-10-01 20:42:38.528119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46448 len:8 PRP1 0x0 PRP2 0x0 00:43:43.405 [2024-10-01 20:42:38.528140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.405 [2024-10-01 20:42:38.528150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:43.405 [2024-10-01 20:42:38.528158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:43.405 [2024-10-01 20:42:38.528166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44800 len:8 PRP1 0x0 PRP2 0x0 00:43:43.405 [2024-10-01 20:42:38.528175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.405 [2024-10-01 20:42:38.528184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:43.405 [2024-10-01 20:42:38.528194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:43.405 [2024-10-01 20:42:38.528202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121080 len:8 PRP1 0x0 PRP2 0x0 00:43:43.405 [2024-10-01 20:42:38.528211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.405 [2024-10-01 20:42:38.528221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:43:43.405 [2024-10-01 20:42:38.528228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:43.405 [2024-10-01 20:42:38.528236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39112 len:8 PRP1 0x0 PRP2 0x0 00:43:43.405 [2024-10-01 20:42:38.528245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.405 [2024-10-01 20:42:38.528287] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21bb750 was disconnected and freed. reset controller. 00:43:43.405 [2024-10-01 20:42:38.528416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:43:43.405 [2024-10-01 20:42:38.528434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.405 [2024-10-01 20:42:38.528445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:43:43.405 [2024-10-01 20:42:38.528454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.405 [2024-10-01 20:42:38.528464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:43:43.405 [2024-10-01 20:42:38.528474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.405 [2024-10-01 20:42:38.528483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:43:43.405 [2024-10-01 20:42:38.528493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.405 [2024-10-01 20:42:38.528517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149870 is same with the state(6) to be set 00:43:43.405 [2024-10-01 20:42:38.528769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:43.405 [2024-10-01 20:42:38.528792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2149870 (9): Bad file descriptor 00:43:43.405 [2024-10-01 20:42:38.528906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:43.405 [2024-10-01 20:42:38.528929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2149870 with addr=10.0.0.3, port=4420 00:43:43.405 [2024-10-01 20:42:38.528940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149870 is same with the state(6) to be set 00:43:43.405 [2024-10-01 20:42:38.528958] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2149870 (9): Bad file descriptor 00:43:43.405 [2024-10-01 20:42:38.528973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:43.405 [2024-10-01 20:42:38.528982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:43.405 [2024-10-01 20:42:38.528993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed 
state. 00:43:43.405 [2024-10-01 20:42:38.529013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:43.405 [2024-10-01 20:42:38.529024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:43.405 20:42:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 97298 00:43:45.530 11474.00 IOPS, 44.82 MiB/s 7649.33 IOPS, 29.88 MiB/s [2024-10-01 20:42:40.529234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:45.530 [2024-10-01 20:42:40.529295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2149870 with addr=10.0.0.3, port=4420 00:43:45.530 [2024-10-01 20:42:40.529309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149870 is same with the state(6) to be set 00:43:45.530 [2024-10-01 20:42:40.529332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2149870 (9): Bad file descriptor 00:43:45.530 [2024-10-01 20:42:40.529361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:45.530 [2024-10-01 20:42:40.529372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:45.530 [2024-10-01 20:42:40.529382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:45.530 [2024-10-01 20:42:40.529406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:45.530 [2024-10-01 20:42:40.529416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:47.424 5737.00 IOPS, 22.41 MiB/s 4589.60 IOPS, 17.93 MiB/s [2024-10-01 20:42:42.529577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:47.424 [2024-10-01 20:42:42.529635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2149870 with addr=10.0.0.3, port=4420 00:43:47.424 [2024-10-01 20:42:42.529649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149870 is same with the state(6) to be set 00:43:47.424 [2024-10-01 20:42:42.529669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2149870 (9): Bad file descriptor 00:43:47.424 [2024-10-01 20:42:42.529685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:47.424 [2024-10-01 20:42:42.529694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:47.424 [2024-10-01 20:42:42.529705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:47.424 [2024-10-01 20:42:42.529726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:47.424 [2024-10-01 20:42:42.529737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:49.290 3824.67 IOPS, 14.94 MiB/s 3278.29 IOPS, 12.81 MiB/s [2024-10-01 20:42:44.529858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:43:49.290 [2024-10-01 20:42:44.529908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:49.290 [2024-10-01 20:42:44.529918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:49.290 [2024-10-01 20:42:44.529928] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:43:49.290 [2024-10-01 20:42:44.529950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:50.485 2868.50 IOPS, 11.21 MiB/s 00:43:50.485 Latency(us) 00:43:50.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:50.485 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:43:50.485 NVMe0n1 : 8.14 2817.62 11.01 15.72 0.00 45136.36 3485.32 7015926.69 00:43:50.485 =================================================================================================================== 00:43:50.485 Total : 2817.62 11.01 15.72 0.00 45136.36 3485.32 7015926.69 00:43:50.485 { 00:43:50.485 "results": [ 00:43:50.485 { 00:43:50.485 "job": "NVMe0n1", 00:43:50.485 "core_mask": "0x4", 00:43:50.485 "workload": "randread", 00:43:50.485 "status": "finished", 00:43:50.485 "queue_depth": 128, 00:43:50.485 "io_size": 4096, 00:43:50.485 "runtime": 8.14445, 00:43:50.485 "iops": 2817.6242717433342, 00:43:50.485 "mibps": 11.0063448114974, 00:43:50.485 "io_failed": 128, 00:43:50.485 "io_timeout": 0, 00:43:50.485 "avg_latency_us": 45136.36186892324, 00:43:50.485 "min_latency_us": 3485.3236363636365, 00:43:50.485 "max_latency_us": 7015926.69090909 00:43:50.485 } 00:43:50.485 ], 00:43:50.485 "core_count": 1 00:43:50.485 } 00:43:50.485 20:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:50.485 Attaching 5 probes... 
00:43:50.485 1360.915624: reset bdev controller NVMe0 00:43:50.485 1360.981510: reconnect bdev controller NVMe0 00:43:50.485 3361.268343: reconnect delay bdev controller NVMe0 00:43:50.485 3361.306794: reconnect bdev controller NVMe0 00:43:50.485 5361.628075: reconnect delay bdev controller NVMe0 00:43:50.485 5361.643582: reconnect bdev controller NVMe0 00:43:50.485 7361.977569: reconnect delay bdev controller NVMe0 00:43:50.485 7361.993272: reconnect bdev controller NVMe0 00:43:50.485 20:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:43:50.485 20:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:43:50.485 20:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 97239 00:43:50.485 20:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:50.485 20:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 97211 00:43:50.485 20:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 97211 ']' 00:43:50.485 20:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 97211 00:43:50.485 20:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:43:50.485 20:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:50.485 20:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97211 00:43:50.485 killing process with pid 97211 00:43:50.485 Received shutdown signal, test time was about 8.214771 seconds 00:43:50.485 00:43:50.485 Latency(us) 00:43:50.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:50.485 =================================================================================================================== 00:43:50.485 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:50.485 20:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:43:50.485 20:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:43:50.485 20:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97211' 00:43:50.485 20:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 97211 00:43:50.485 20:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 97211 00:43:50.744 20:42:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@514 -- # nvmfcleanup 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:51.004 rmmod 
nvme_tcp 00:43:51.004 rmmod nvme_fabrics 00:43:51.004 rmmod nvme_keyring 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@515 -- # '[' -n 96641 ']' 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # killprocess 96641 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96641 ']' 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96641 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96641 00:43:51.004 killing process with pid 96641 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96641' 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96641 00:43:51.004 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96641 00:43:51.262 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:43:51.262 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:43:51.262 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:43:51.262 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:43:51.262 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@789 -- # iptables-save 00:43:51.262 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:43:51.262 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@789 -- # iptables-restore 00:43:51.262 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:51.262 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:43:51.262 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:43:51.262 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:43:51.262 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:43:51.262 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:43:51.262 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:43:51.262 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:43:51.262 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:43:51.262 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:43:51.262 20:42:46 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:43:51.262 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:43:51.521 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:43:51.521 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:51.521 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:51.521 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:43:51.521 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:51.521 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:51.521 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:51.521 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:43:51.521 00:43:51.521 real 0m47.062s 00:43:51.521 user 2m19.316s 00:43:51.521 sys 0m4.447s 00:43:51.521 ************************************ 00:43:51.521 END TEST nvmf_timeout 00:43:51.521 ************************************ 00:43:51.521 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:51.521 20:42:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:43:51.521 20:42:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:43:51.521 20:42:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:43:51.521 ************************************ 00:43:51.521 END TEST nvmf_host 00:43:51.521 ************************************ 00:43:51.521 00:43:51.521 real 5m35.703s 00:43:51.521 user 14m25.851s 00:43:51.521 sys 1m0.871s 00:43:51.521 20:42:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:51.521 20:42:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:43:51.521 20:42:46 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:43:51.521 20:42:46 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:43:51.521 20:42:46 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:43:51.521 20:42:46 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:51.521 20:42:46 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:51.521 20:42:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:51.521 ************************************ 00:43:51.521 START TEST nvmf_target_core_interrupt_mode 00:43:51.521 ************************************ 00:43:51.521 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:43:51.780 * Looking for test storage... 
00:43:51.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:43:51.780 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:51.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:51.781 --rc genhtml_branch_coverage=1 00:43:51.781 --rc genhtml_function_coverage=1 00:43:51.781 --rc genhtml_legend=1 00:43:51.781 --rc geninfo_all_blocks=1 00:43:51.781 --rc geninfo_unexecuted_blocks=1 00:43:51.781 00:43:51.781 ' 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:51.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:51.781 --rc genhtml_branch_coverage=1 00:43:51.781 --rc genhtml_function_coverage=1 00:43:51.781 --rc genhtml_legend=1 00:43:51.781 --rc geninfo_all_blocks=1 00:43:51.781 --rc geninfo_unexecuted_blocks=1 00:43:51.781 00:43:51.781 ' 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:51.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:51.781 --rc genhtml_branch_coverage=1 00:43:51.781 --rc genhtml_function_coverage=1 00:43:51.781 --rc genhtml_legend=1 00:43:51.781 --rc geninfo_all_blocks=1 00:43:51.781 --rc geninfo_unexecuted_blocks=1 00:43:51.781 00:43:51.781 ' 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:51.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:51.781 --rc genhtml_branch_coverage=1 00:43:51.781 --rc genhtml_function_coverage=1 00:43:51.781 --rc genhtml_legend=1 00:43:51.781 --rc geninfo_all_blocks=1 00:43:51.781 --rc geninfo_unexecuted_blocks=1 00:43:51.781 00:43:51.781 ' 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:51.781 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:51.782 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:51.782 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:51.782 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:51.782 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:51.782 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:51.782 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:51.782 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:51.782 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:51.782 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:43:51.782 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:43:51.782 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:43:51.782 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:43:51.782 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:51.782 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:51.782 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:51.782 ************************************ 00:43:51.782 START TEST nvmf_abort 00:43:51.782 ************************************ 00:43:51.782 20:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:43:51.782 * Looking for test storage... 00:43:51.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:43:51.782 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:51.782 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:51.782 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:43:52.041 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:52.041 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:52.041 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:52.041 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:52.041 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:43:52.041 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:43:52.041 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:43:52.041 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:43:52.041 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:43:52.041 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:43:52.041 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:43:52.041 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:52.041 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:43:52.041 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:43:52.041 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:52.041 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:52.041 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:43:52.041 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:43:52.041 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:52.041 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:52.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:52.042 --rc genhtml_branch_coverage=1 00:43:52.042 --rc genhtml_function_coverage=1 00:43:52.042 --rc genhtml_legend=1 00:43:52.042 --rc geninfo_all_blocks=1 00:43:52.042 --rc geninfo_unexecuted_blocks=1 00:43:52.042 00:43:52.042 ' 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:52.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:52.042 --rc genhtml_branch_coverage=1 00:43:52.042 --rc genhtml_function_coverage=1 00:43:52.042 --rc genhtml_legend=1 00:43:52.042 --rc geninfo_all_blocks=1 00:43:52.042 --rc geninfo_unexecuted_blocks=1 00:43:52.042 00:43:52.042 ' 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:52.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:52.042 --rc genhtml_branch_coverage=1 00:43:52.042 --rc genhtml_function_coverage=1 00:43:52.042 --rc genhtml_legend=1 00:43:52.042 --rc geninfo_all_blocks=1 00:43:52.042 --rc geninfo_unexecuted_blocks=1 00:43:52.042 00:43:52.042 ' 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:52.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:52.042 --rc genhtml_branch_coverage=1 00:43:52.042 --rc genhtml_function_coverage=1 00:43:52.042 --rc genhtml_legend=1 00:43:52.042 --rc geninfo_all_blocks=1 00:43:52.042 --rc geninfo_unexecuted_blocks=1 00:43:52.042 00:43:52.042 ' 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:52.042 20:42:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:43:52.042 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@458 -- # nvmf_veth_init 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:43:52.043 Cannot find device "nvmf_init_br" 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:43:52.043 Cannot find device "nvmf_init_br2" 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:43:52.043 Cannot find device "nvmf_tgt_br" 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # true 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:43:52.043 Cannot find device "nvmf_tgt_br2" 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # true 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:43:52.043 Cannot find device "nvmf_init_br" 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:43:52.043 Cannot find device "nvmf_init_br2" 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:43:52.043 Cannot find device "nvmf_tgt_br" 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # true 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:43:52.043 Cannot find device "nvmf_tgt_br2" 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@169 -- # true 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:43:52.043 Cannot find device "nvmf_br" 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # true 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:43:52.043 Cannot find device "nvmf_init_if" 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # true 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:43:52.043 Cannot find device "nvmf_init_if2" 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # true 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:52.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # true 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:52.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # true 00:43:52.043 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:43:52.302 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:43:52.303 
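The nvmf_veth_init steps above boil down to a small amount of iproute2 plumbing. A minimal standalone sketch, with interface names and addresses copied from the log (only the first initiator/target pair is shown; the *_if2/*_br2 interfaces follow the same pattern, and the bridge enslavement appears in the commands that follow):

# Run as root. Recreates the namespace/veth topology nvmf_veth_init builds above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# The nvmf_init_br/nvmf_tgt_br peer ends are then attached to a bridge
# (nvmf_br) so the two sides share an L2 segment -- see the
# "ip link set ... master nvmf_br" commands that follow in the log.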
20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:43:52.303 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:43:52.303 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:43:52.303 00:43:52.303 --- 10.0.0.3 ping statistics --- 00:43:52.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:52.303 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:43:52.303 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:43:52.303 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:43:52.303 00:43:52.303 --- 10.0.0.4 ping statistics --- 00:43:52.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:52.303 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:43:52.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:52.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:43:52.303 00:43:52.303 --- 10.0.0.1 ping statistics --- 00:43:52.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:52.303 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:43:52.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:52.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:43:52.303 00:43:52.303 --- 10.0.0.2 ping statistics --- 00:43:52.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:52.303 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@459 -- # return 0 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=97707 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 97707 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 97707 ']' 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:52.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:52.303 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:43:52.563 [2024-10-01 20:42:47.594018] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:52.563 [2024-10-01 20:42:47.595148] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:43:52.563 [2024-10-01 20:42:47.595222] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:52.563 [2024-10-01 20:42:47.725840] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:52.563 [2024-10-01 20:42:47.776738] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:52.563 [2024-10-01 20:42:47.777019] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:52.563 [2024-10-01 20:42:47.777223] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:52.563 [2024-10-01 20:42:47.777343] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:52.563 [2024-10-01 20:42:47.777383] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:52.563 [2024-10-01 20:42:47.777611] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:43:52.563 [2024-10-01 20:42:47.777787] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:43:52.563 [2024-10-01 20:42:47.777790] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:52.822 [2024-10-01 20:42:47.867513] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:52.822 [2024-10-01 20:42:47.867911] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:52.822 [2024-10-01 20:42:47.868110] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:43:52.822 [2024-10-01 20:42:47.869207] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
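nvmfappstart above reduces to launching the target binary inside the namespace and waiting for its RPC socket. A minimal sketch, with the command line copied from the log; the socket check is a simplified stand-in for waitforlisten, which additionally confirms the app answers RPCs (/var/tmp/spdk.sock is the default RPC path):

# Start nvmf_tgt in the target namespace: -i 0 is the shared-memory id,
# -e 0xFFFF the tracepoint group mask, -m 0xE pins reactors to cores 1-3
# (matching the "Reactor started on core 1/2/3" notices above), and
# --interrupt-mode runs the reactors interrupt-driven instead of polling.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!

# Wait for the UNIX-domain RPC socket to appear before issuing rpc calls.
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done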
00:43:52.822 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:52.822 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:43:52.822 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:43:52.822 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:52.822 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:43:52.822 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:52.822 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:43:52.822 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:52.822 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:43:52.822 [2024-10-01 20:42:47.955744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:52.822 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:52.822 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:43:52.822 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:52.822 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:43:52.822 Malloc0 00:43:52.822 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:52.822 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:43:52.822 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:52.822 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:43:52.822 Delay0 00:43:52.822 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:52.823 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:43:52.823 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:52.823 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:43:52.823 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:52.823 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:43:52.823 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:52.823 20:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:43:52.823 20:42:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:52.823 20:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:43:52.823 20:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:52.823 20:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:43:52.823 [2024-10-01 20:42:48.011766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:43:52.823 20:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:52.823 20:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:43:52.823 20:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:52.823 20:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:43:52.823 20:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:52.823 20:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:43:53.081 [2024-10-01 20:42:48.192700] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:43:54.985 Initializing NVMe Controllers 00:43:54.985 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:43:54.985 controller IO queue size 128 less than required 00:43:54.985 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:43:54.985 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:43:54.985 Initialization complete. Launching workers. 
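The rpc_cmd sequence above, plus the abort example invocation, amounts to the following bring-up. A sketch using scripts/rpc.py with arguments copied from the log; the flag interpretations in the comments are assumptions based on common SPDK usage, and in the statistics that follow the successful plus unsuccessful aborts should equal the aborts submitted (30096 + 57 = 30153):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256        # TCP transport (-o/-u/-a copied verbatim)
$rpc bdev_malloc_create 64 4096 -b Malloc0                 # 64 MiB RAM disk, 4096-byte blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000            # ~1 s of injected latency per I/O,
                                                           # so aborts can race outstanding I/Os
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

# Drive the target with the abort example: queue depth 128 (-q), one core
# (-c 0x1), one second of runtime (-t 1), warning-level logging (-l).
/home/vagrant/spdk_repo/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128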
00:43:54.985 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30096 00:43:54.985 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30153, failed to submit 66 00:43:54.985 success 30096, unsuccessful 57, failed 0 00:43:54.985 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:54.985 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:54.985 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:43:54.985 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:54.985 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:43:54.985 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:43:54.985 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:43:54.985 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:43:55.244 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:55.244 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:43:55.244 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:55.244 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:55.244 rmmod nvme_tcp 00:43:55.244 rmmod nvme_fabrics 00:43:55.244 rmmod nvme_keyring 00:43:55.244 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:55.244 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:43:55.244 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:43:55.244 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 97707 ']' 00:43:55.244 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 97707 00:43:55.244 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 97707 ']' 00:43:55.244 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 97707 00:43:55.244 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:43:55.244 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:55.244 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97707 00:43:55.244 killing process with pid 97707 00:43:55.244 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:55.244 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:55.244 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97707' 00:43:55.244 
20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 97707 00:43:55.244 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 97707 00:43:55.502 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:43:55.502 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:43:55.502 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:43:55.502 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:43:55.502 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:43:55.502 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:43:55.502 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:43:55.502 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:55.503 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:43:55.503 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:43:55.503 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:43:55.503 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:43:55.503 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:43:55.503 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:43:55.503 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:43:55.503 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:43:55.503 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:43:55.503 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:43:55.503 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:43:55.761 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:43:55.761 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:55.761 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:55.761 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:43:55.761 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:55.761 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:55.761 20:42:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:55.761 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:43:55.761 00:43:55.761 real 0m3.916s 00:43:55.761 user 0m8.846s 00:43:55.761 sys 0m1.501s 00:43:55.761 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:55.761 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:43:55.761 ************************************ 00:43:55.761 END TEST nvmf_abort 00:43:55.761 ************************************ 00:43:55.761 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:43:55.761 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:55.761 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:55.761 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:55.761 ************************************ 00:43:55.761 START TEST nvmf_ns_hotplug_stress 00:43:55.761 ************************************ 00:43:55.761 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:43:55.761 * Looking for test storage... 00:43:55.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:43:55.761 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:55.761 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:43:55.761 20:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:56.019 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:56.019 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:56.019 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:56.019 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:56.019 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:43:56.019 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:43:56.019 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:43:56.019 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:43:56.019 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:43:56.019 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:43:56.019 20:42:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:43:56.019 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:56.019 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:43:56.019 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:43:56.019 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:56.019 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:56.019 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:43:56.019 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:43:56.019 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:56.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.020 --rc genhtml_branch_coverage=1 00:43:56.020 --rc genhtml_function_coverage=1 00:43:56.020 --rc genhtml_legend=1 00:43:56.020 --rc geninfo_all_blocks=1 00:43:56.020 --rc geninfo_unexecuted_blocks=1 00:43:56.020 00:43:56.020 ' 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:56.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.020 --rc genhtml_branch_coverage=1 00:43:56.020 --rc genhtml_function_coverage=1 00:43:56.020 --rc genhtml_legend=1 00:43:56.020 --rc geninfo_all_blocks=1 00:43:56.020 --rc geninfo_unexecuted_blocks=1 00:43:56.020 00:43:56.020 
' 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:56.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.020 --rc genhtml_branch_coverage=1 00:43:56.020 --rc genhtml_function_coverage=1 00:43:56.020 --rc genhtml_legend=1 00:43:56.020 --rc geninfo_all_blocks=1 00:43:56.020 --rc geninfo_unexecuted_blocks=1 00:43:56.020 00:43:56.020 ' 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:56.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:56.020 --rc genhtml_branch_coverage=1 00:43:56.020 --rc genhtml_function_coverage=1 00:43:56.020 --rc genhtml_legend=1 00:43:56.020 --rc geninfo_all_blocks=1 00:43:56.020 --rc geninfo_unexecuted_blocks=1 00:43:56.020 00:43:56.020 ' 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:56.020 20:42:51 
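The lcov probe at the top of this test walks scripts/common.sh's version machinery: lt 1.15 2 splits both versions on IFS=.-: and compares them field by field until one side wins. A self-contained re-implementation of the same idea (the function name version_lt is illustrative, and numeric-only fields are assumed):

# Illustrative field-wise version comparison, mirroring cmp_versions above.
version_lt() {                      # returns 0 (true) if $1 < $2
    local -a v1 v2
    local i n
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                        # equal is not less-than
}
version_lt 1.15 2 && echo "lcov predates 2.x"   # matches the trace's result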
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:56.020 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:56.021 20:42:51 
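build_nvmf_app_args (nvmf/common.sh@25-39, traced above) assembles the target's argument vector as a bash array so each conditional flag, including the --interrupt-mode switch this run exercises, can be appended safely. A condensed sketch of the pattern (everything except the NVMF_APP name and the flags visible in the trace is illustrative):

# Conditional argv assembly, as in build_nvmf_app_args above.
NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
NVMF_APP+=(-i "${NVMF_APP_SHM_ID:-0}" -e 0xFFFF)        # shm id + trace mask
interrupt_mode=1                                         # illustrative flag
(( interrupt_mode )) && NVMF_APP+=(--interrupt-mode)     # the '[' 1 -eq 1 ']' branch
"${NVMF_APP[@]}" -m 0xE &                                # array expansion keeps quoting intact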
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # nvmf_veth_init 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:43:56.021 Cannot find device "nvmf_init_br" 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 
-- # ip link set nvmf_init_br2 nomaster 00:43:56.021 Cannot find device "nvmf_init_br2" 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:43:56.021 Cannot find device "nvmf_tgt_br" 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:43:56.021 Cannot find device "nvmf_tgt_br2" 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:43:56.021 Cannot find device "nvmf_init_br" 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:43:56.021 Cannot find device "nvmf_init_br2" 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:43:56.021 Cannot find device "nvmf_tgt_br" 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:43:56.021 Cannot find device "nvmf_tgt_br2" 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:43:56.021 Cannot find device "nvmf_br" 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:43:56.021 Cannot find device "nvmf_init_if" 00:43:56.021 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:43:56.022 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:43:56.022 Cannot find device "nvmf_init_if2" 00:43:56.022 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:43:56.022 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:56.022 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:56.022 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:43:56.022 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:56.022 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:56.022 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:43:56.022 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:43:56.022 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:43:56.022 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:43:56.022 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:43:56.022 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:43:56.280 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:43:56.280 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:43:56.280 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:43:56.280 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:43:56.280 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:43:56.280 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:43:56.280 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:43:56.280 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:43:56.280 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:43:56.280 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:43:56.280 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:43:56.280 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:43:56.280 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:43:56.280 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:43:56.280 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:43:56.280 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:43:56.280 20:42:51 
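nvmf_veth_init (traced above and just below) builds the test topology from scratch: a target network namespace, four veth pairs whose bridge-facing ends stay in the root namespace, /24 addresses on both sides, and a bridge that the entries below enslave the br-ends to. Condensed to one initiator/target pair:

# Topology sketch distilled from the nvmf_veth_init trace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end in
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  up && ip link set nvmf_tgt_br  master nvmf_br
# (the real helper repeats this for the *_if2/*_br2 second pair)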
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:43:56.280 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:43:56.280 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:43:56.280 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:43:56.280 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:43:56.280 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:43:56.281 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:43:56.281 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:43:56.281 00:43:56.281 --- 10.0.0.3 ping statistics --- 00:43:56.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:56.281 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:43:56.281 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:43:56.281 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:43:56.281 00:43:56.281 --- 10.0.0.4 ping statistics --- 00:43:56.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:56.281 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:43:56.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:56.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:43:56.281 00:43:56.281 --- 10.0.0.1 ping statistics --- 00:43:56.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:56.281 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:43:56.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:56.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:43:56.281 00:43:56.281 --- 10.0.0.2 ping statistics --- 00:43:56.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:56.281 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # return 0 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=97983 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 97983 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 97983 ']' 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:56.281 20:42:51 
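The ipts wrapper seen above tags every firewall rule it inserts with an SPDK_NVMF comment; that tag is what lets the iptr step in the earlier nvmf_abort teardown strip exactly those rules and nothing else. Both wrappers, as reconstructed from their traces:

# Insert a rule tagged with its own text, per the nvmf/common.sh@788 trace.
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

# Cleanup (the @789 trace): reload the ruleset minus every tagged rule.
iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }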
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:56.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:56.281 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:43:56.540 [2024-10-01 20:42:51.541327] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:56.540 [2024-10-01 20:42:51.542299] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:43:56.540 [2024-10-01 20:42:51.542377] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:56.540 [2024-10-01 20:42:51.671337] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:56.540 [2024-10-01 20:42:51.721995] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:56.540 [2024-10-01 20:42:51.722056] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:56.540 [2024-10-01 20:42:51.722065] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:56.540 [2024-10-01 20:42:51.722072] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:56.540 [2024-10-01 20:42:51.722078] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:56.540 [2024-10-01 20:42:51.722225] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:43:56.540 [2024-10-01 20:42:51.722305] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:43:56.540 [2024-10-01 20:42:51.722308] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:56.799 [2024-10-01 20:42:51.809334] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:56.799 [2024-10-01 20:42:51.810059] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:43:56.799 [2024-10-01 20:42:51.810091] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:56.799 [2024-10-01 20:42:51.810891] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
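nvmfappstart launches the target inside the namespace and then blocks in waitforlisten until the RPC socket answers; the DPDK/EAL and reactor notices above are that startup. The launch command as traced, plus a minimal wait loop in the spirit of waitforlisten (polling via rpc_get_methods is an assumption, not the helper's exact probe):

# Target launch as traced: -m 0xE pins reactors to cores 1-3, -e 0xFFFF
# enables all tracepoint groups, --interrupt-mode makes reactors sleep in
# epoll instead of busy-polling.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!

# Assumed-minimal stand-in for waitforlisten: poll until the RPC answers.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" || exit 1    # bail out if the target died
    sleep 0.1
done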
00:43:56.799 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:56.799 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:43:56.799 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:43:56.799 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:56.799 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:43:56.799 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:56.799 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:43:56.799 20:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:57.058 [2024-10-01 20:42:52.171295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:57.058 20:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:43:57.316 20:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:43:57.574 [2024-10-01 20:42:52.615707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:43:57.574 20:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:43:57.832 20:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:43:58.090 Malloc0 00:43:58.090 20:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:43:58.349 Delay0 00:43:58.349 20:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:43:58.607 20:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:43:58.866 NULL1 00:43:58.866 20:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:43:59.124 20:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:43:59.124 20:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=98095 00:43:59.124 20:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:43:59.124 20:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:00.499 Read completed with error (sct=0, sc=11) 00:44:00.499 20:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:00.499 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:44:00.499 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:44:00.499 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:44:00.499 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:44:00.499 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:44:00.499 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:44:00.499 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:44:00.499 20:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:44:00.499 20:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:44:00.757 true 00:44:00.757 20:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:00.757 20:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:01.692 20:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:01.692 20:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:44:01.692 20:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:44:01.951 true 00:44:01.951 20:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:01.951 20:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:02.211 20:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:02.536 20:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:44:02.536 20:42:57 
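From here the log settles into the stress loop proper (ns_hotplug_stress.sh@44-50): as long as the perf initiator (PID 98095) is alive, the script unplugs namespace 1, replugs Delay0, and grows NULL1 by one 512-byte block per pass. The loop, reconstructed from the repeating trace entries (the script's exact control flow may differ slightly):

# Hotplug stress loop distilled from the repeating @44-@50 entries below.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do            # initiator still running?
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    "$rpc" nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0
    (( ++null_size ))
    "$rpc" bdev_null_resize NULL1 "$null_size"       # the "true" lines are its output
done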
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:44:02.815 true 00:44:02.815 20:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:02.815 20:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:03.752 20:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:03.752 20:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:44:03.752 20:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:44:04.009 true 00:44:04.009 20:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:04.009 20:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:04.266 20:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:04.525 20:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:44:04.525 20:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:44:04.783 true 00:44:04.783 20:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:04.783 20:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:05.042 20:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:05.300 20:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:44:05.300 20:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:44:05.558 true 00:44:05.558 20:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:05.558 20:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:06.493 20:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:06.751 20:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:44:06.751 20:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:44:07.010 true 00:44:07.010 20:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:07.010 20:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:07.268 20:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:07.527 20:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:44:07.527 20:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:44:07.786 true 00:44:07.786 20:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:07.786 20:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:08.045 20:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:08.304 20:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:44:08.304 20:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:44:08.562 true 00:44:08.562 20:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:08.562 20:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:09.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:44:09.497 20:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:09.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:44:09.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:44:09.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:44:09.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:44:09.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:44:09.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:44:09.756 20:43:04 
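The "Read completed with error (sct=0, sc=11)" bursts are the expected signature of this test: reads race the namespace removals, and NVMe generic status 0x0b (Invalid Namespace or Format) comes back until the namespace is re-attached. The initiator generating that load, as traced at the top of the test; judging by the suppression messages, -Q 1000 makes perf report only one error per thousand instead of aborting:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &   # 30 s, QD 128, 512 B random reads
PERF_PID=$!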
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:44:09.756 20:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:44:10.015 true 00:44:10.015 20:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:10.016 20:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:10.952 20:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:10.952 20:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:44:10.952 20:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:44:11.210 true 00:44:11.210 20:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:11.210 20:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:11.468 20:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:11.727 20:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:44:11.727 20:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:44:11.985 true 00:44:11.985 20:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:11.985 20:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:12.920 20:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:13.178 20:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:44:13.178 20:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:44:13.437 true 00:44:13.437 20:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:13.437 20:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:13.437 20:43:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:13.695 20:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:44:13.695 20:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:44:13.952 true 00:44:13.952 20:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:13.952 20:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:14.885 20:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:15.143 20:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:44:15.143 20:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:44:15.401 true 00:44:15.401 20:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:15.401 20:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:15.659 20:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:15.917 20:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:44:15.917 20:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:44:16.175 true 00:44:16.175 20:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:16.175 20:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:16.433 20:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:16.691 20:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:44:16.691 20:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:44:16.950 true 00:44:16.950 20:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:16.950 20:43:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:17.885 20:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:17.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:44:18.145 20:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:44:18.145 20:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:44:18.404 true 00:44:18.404 20:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:18.404 20:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:18.663 20:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:18.922 20:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:44:18.922 20:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:44:19.180 true 00:44:19.180 20:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:19.180 20:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:20.117 20:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:20.117 20:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:44:20.117 20:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:44:20.376 true 00:44:20.376 20:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:20.376 20:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:20.634 20:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:20.893 20:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:44:20.893 20:43:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:44:21.151 true 00:44:21.151 20:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:21.151 20:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:22.088 20:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:22.088 20:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:44:22.088 20:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:44:22.361 true 00:44:22.361 20:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:22.361 20:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:22.620 20:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:22.879 20:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:44:22.879 20:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:44:23.137 true 00:44:23.137 20:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:23.137 20:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:24.073 20:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:24.073 20:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:44:24.073 20:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:44:24.331 true 00:44:24.331 20:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:24.331 20:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:24.590 20:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:24.850 20:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:44:24.850 20:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:44:25.153 true 00:44:25.153 20:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:25.153 20:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:25.415 20:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:25.675 20:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:44:25.675 20:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:44:25.934 true 00:44:25.934 20:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:25.934 20:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:26.872 20:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:27.131 20:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:44:27.131 20:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:44:27.390 true 00:44:27.390 20:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:27.390 20:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:27.650 20:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:27.909 20:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:44:27.909 20:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:44:28.169 true 00:44:28.169 20:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095 00:44:28.169 20:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:44:28.428 20:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:44:28.687 20:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:44:28.687 20:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:44:28.947 true
00:44:28.947 20:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095
00:44:28.947 20:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:44:29.884 Initializing NVMe Controllers
00:44:29.884 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:44:29.884 Controller IO queue size 128, less than required.
00:44:29.884 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:44:29.884 Controller IO queue size 128, less than required.
00:44:29.884 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:44:29.884 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:44:29.884 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:44:29.884 Initialization complete. Launching workers.
00:44:29.884 ========================================================
00:44:29.884 Latency(us)
00:44:29.884 Device Information : IOPS MiB/s Average min max
00:44:29.884 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 576.23 0.28 112858.65 2816.01 1033046.99
00:44:29.884 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11186.63 5.46 11442.72 1385.93 525952.67
00:44:29.884 ========================================================
00:44:29.884 Total : 11762.87 5.74 16410.84 1385.93 1033046.99
00:44:29.884
00:44:29.884 20:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:44:30.144 20:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:44:30.144 20:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:44:30.403 true
00:44:30.403 20:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98095
00:44:30.403 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (98095) - No such process
00:44:30.403 20:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 98095
00:44:30.403 20:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:44:30.662 20:43:25
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:44:30.921 20:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:44:30.921 20:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:44:30.921 20:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:44:30.921 20:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:44:30.921 20:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:44:31.181 null0 00:44:31.181 20:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:44:31.181 20:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:44:31.181 20:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:44:31.181 null1 00:44:31.181 20:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:44:31.181 20:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:44:31.181 20:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:44:31.440 null2 00:44:31.440 20:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:44:31.440 20:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:44:31.440 20:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:44:31.699 null3 00:44:31.699 20:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:44:31.699 20:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:44:31.699 20:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:44:31.958 null4 00:44:31.958 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:44:31.959 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:44:31.959 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:44:32.218 null5 00:44:32.218 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:44:32.218 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:44:32.218 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:44:32.478 null6 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:44:32.478 null7 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
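Before the perf process (PID 98095) exited, the trace above cycled through ns_hotplug_stress.sh lines @44-@50: check that the I/O generator is still alive, detach namespace 1, re-attach the Delay0 bdev, and grow the NULL1 null bdev by one unit per pass (null_size 1014 ... 1030). A minimal bash sketch of that loop, reconstructed from the trace; $rpc_py, $perf_pid, and the null_size seed are assumed names, since only the rpc.py invocations appear verbatim in the log:

#!/usr/bin/env bash
# Sketch of the hot-plug stress loop traced at target/ns_hotplug_stress.sh@44-@50.
# Assumptions: $rpc_py, $perf_pid, and the null_size starting value.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
perf_pid=98095    # background I/O generator whose liveness gates the loop
null_size=1013

while kill -0 "$perf_pid"; do                                          # @44
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46
    null_size=$((null_size + 1))                                       # @49: 1014, 1015, ...
    "$rpc_py" bdev_null_resize NULL1 "$null_size"                      # @50: grow NULL1 each pass
done
wait "$perf_pid"                                                       # @53: reap the generator

Once kill -0 fails (the "No such process" record above), the script falls through to @53-@55 and removes both namespaces before the multi-threaded phase begins.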
00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:32.478 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
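The bdev_null_create records in this stretch of the trace (@58-@60) build the fixtures for the multi-threaded phase: eight null bdevs, null0 through null7, each 100 MiB with a 4096-byte block size. A sketch of that setup, under the same assumption about $rpc_py:

#!/usr/bin/env bash
# Sketch of the null-bdev setup traced at @58-@60. Names and sizes are taken
# from the trace; $rpc_py is an assumed variable name.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nthreads=8                                          # @58
pids=()                                             # @58: filled by the launcher below

for ((i = 0; i < nthreads; i++)); do                # @59
    "$rpc_py" bdev_null_create "null$i" 100 4096    # @60: name, size_mb, block_size
done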
00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
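Each worker traced here runs the script's add_remove helper (@14-@18): bind one namespace ID to one null bdev, then hot-add and hot-remove it ten times against nqn.2016-06.io.spdk:cnode1. A sketch of the helper as the trace implies it, reusing the $rpc_py assumption from the previous sketch:

# Sketch of the add_remove() worker traced at @14-@18. Only the rpc.py calls
# and the loop bound (i < 10) are visible in the trace; the rest is assumed.
add_remove() {
    local nsid=$1 bdev=$2                                                          # @14
    for ((i = 0; i < 10; i++)); do                                                 # @16
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
    done
}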
00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
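The interleaved (( ++i )) and pids+=($!) records show the launcher (@62-@66) starting all eight workers in the background and collecting their PIDs; the "wait 99135 99136 99139 99141 99142 99144 99146 99148" record just below is the parent blocking on them. Sketched under the same assumptions:

# Sketch of the launch-and-wait sequence traced at @62-@66.
for ((i = 0; i < nthreads; i++)); do    # @62
    add_remove $((i + 1)) "null$i" &    # @63: nsid 1..8 paired with null0..null7
    pids+=($!)                          # @64: remember the worker PID
done
wait "${pids[@]}"                       # @66: block until every worker finishes

Because the eight workers run with xtrace enabled in parallel, their @16/@17/@18 records interleave in the remainder of the log, which is why the add/remove commands below appear shuffled rather than in namespace order.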
00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:44:32.738 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:44:32.739 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:44:32.739 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 99135 99136 99139 99141 99142 99144 99146 99148 00:44:32.739 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:44:32.739 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:44:32.739 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:44:32.739 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:32.739 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:44:32.997 20:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:44:32.997 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:44:32.997 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:32.997 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:44:32.997 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:44:32.997 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:44:32.997 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:44:32.997 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:44:33.256 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:33.256 
20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:33.256 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:44:33.256 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:33.256 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:33.256 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:44:33.256 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:33.256 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:33.256 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:44:33.256 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:33.256 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:33.256 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:44:33.256 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:33.256 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:33.256 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:44:33.256 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:33.256 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:33.256 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:44:33.256 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:33.256 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:33.256 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:44:33.256 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:33.256 20:43:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:33.256 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:44:33.515 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:44:33.515 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:33.515 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:44:33.515 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:44:33.515 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:44:33.515 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:44:33.515 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:44:33.515 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:44:33.774 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:33.774 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:33.774 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:44:33.774 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:33.774 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:33.774 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:44:33.774 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:33.774 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:33.774 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:44:33.774 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:33.774 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:33.774 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:44:33.774 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:33.774 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:33.774 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:44:33.774 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:33.774 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:33.774 20:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:44:34.031 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:34.031 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:34.031 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:44:34.031 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:34.031 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:34.031 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:44:34.031 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:44:34.031 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:34.031 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:44:34.031 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:44:34.031 20:43:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:44:34.031 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:44:34.289 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:44:34.289 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:44:34.289 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:34.289 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:34.289 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:44:34.289 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:34.289 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:34.289 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:44:34.289 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:34.289 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:34.289 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:44:34.289 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:34.290 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:34.290 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:44:34.290 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:34.290 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:34.290 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:44:34.290 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:34.290 20:43:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:34.290 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:44:34.548 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:34.548 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:34.548 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:44:34.548 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:34.548 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:34.548 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:44:34.548 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:44:34.548 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:44:34.548 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:44:34.807 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:44:34.807 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:34.807 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:44:34.807 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:44:34.807 20:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:35.066 
20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:35.066 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:44:35.325 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:44:35.325 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:44:35.325 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:44:35.325 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:44:35.325 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:35.325 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:44:35.325 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:44:35.325 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:44:35.583 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:35.583 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:35.583 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:44:35.583 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:35.583 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:35.583 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:44:35.583 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:35.583 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:35.583 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:44:35.583 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:35.583 20:43:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:35.583 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:44:35.583 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:35.583 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:35.584 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:44:35.584 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:35.584 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:35.584 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:44:35.842 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:35.842 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:35.842 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:44:35.842 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:35.842 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:35.842 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:44:35.842 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:44:35.842 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:44:35.842 20:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:44:35.842 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:44:35.842 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:36.101 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:44:36.101 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:44:36.101 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:36.101 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:36.101 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:44:36.101 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:44:36.101 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:36.101 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:36.101 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:44:36.101 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:36.101 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:36.101 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:44:36.101 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:36.101 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:36.101 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:44:36.359 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:36.359 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:36.359 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:44:36.359 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:36.359 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:36.359 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:44:36.359 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:36.359 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:36.359 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:44:36.359 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:44:36.359 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:36.359 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:36.359 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:44:36.359 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:44:36.359 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:44:36.618 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:44:36.618 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:36.618 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:36.618 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:36.618 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:44:36.618 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:44:36.618 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:44:36.618 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:44:36.618 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:36.618 20:43:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:36.618 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:44:36.876 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:36.876 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:36.876 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:44:36.876 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:36.876 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:36.876 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:44:36.876 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:36.876 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:36.876 20:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:44:36.876 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:44:36.876 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:36.876 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:36.877 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:44:36.877 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:36.877 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:36.877 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:44:36.877 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:36.877 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:36.877 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:44:37.135 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:44:37.135 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:44:37.135 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:44:37.135 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:37.135 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:37.135 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:37.135 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:44:37.135 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:44:37.394 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:44:37.394 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:37.394 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:37.394 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:44:37.394 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:37.394 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:37.394 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:44:37.394 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:44:37.394 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:37.394 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:37.394 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:44:37.394 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:37.394 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:37.394 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:44:37.394 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:44:37.653 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:37.653 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:37.653 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:44:37.653 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:37.653 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:37.653 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:44:37.653 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:44:37.653 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:44:37.653 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:37.653 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:37.653 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:44:37.653 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:44:37.653 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:37.653 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:37.653 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:44:37.912 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:37.912 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:44:37.912 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:37.912 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:37.912 20:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:44:37.912 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:44:37.912 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:37.912 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:37.912 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:44:37.912 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:44:37.912 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:37.912 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:37.912 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:44:37.912 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:44:38.170 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:38.170 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:38.170 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:44:38.170 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:38.170 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:38.170 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:44:38.170 
20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:44:38.170 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:38.170 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:38.170 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:44:38.170 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:38.170 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:38.170 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:44:38.170 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:38.170 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:38.429 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:44:38.429 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:44:38.429 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:44:38.429 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:38.429 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:38.429 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:44:38.429 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:44:38.429 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:44:38.688 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:38.688 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:38.688 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:38.688 20:43:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:38.688 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:38.688 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:38.688 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:38.688 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:38.688 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:38.688 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:38.688 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:44:38.688 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:44:38.688 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:44:38.688 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:44:38.688 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:44:38.688 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:44:38.688 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:38.688 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:44:38.688 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:38.688 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:38.688 rmmod nvme_tcp 00:44:38.688 rmmod nvme_fabrics 00:44:38.948 rmmod nvme_keyring 00:44:38.948 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:38.948 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:44:38.948 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:44:38.948 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 97983 ']' 00:44:38.948 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 97983 00:44:38.948 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 97983 ']' 00:44:38.948 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 97983 00:44:38.948 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:44:38.948 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
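
The interleaved rpc.py calls traced above all come from three lines of target/ns_hotplug_stress.sh: a bounded counter loop (the (( i < 10 )) / (( ++i )) checks at line 16) keeps re-adding namespaces 1-8 of nqn.2016-06.io.spdk:cnode1 from the null bdevs null0-null7 (line 17), while removals (line 18) race against it. Below is a minimal bash sketch of that shape, reconstructed only from the xtrace output; the helper names, the randomized ID choice, and the fixed removal order are illustrative assumptions, not the script's actual code.

  #!/usr/bin/env bash
  # Hedged reconstruction of the namespace churn traced above; not the real script.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subsys=nqn.2016-06.io.spdk:cnode1
  add_loop() {                          # shape of ns_hotplug_stress.sh lines 16-17
      local i=0 n
      while (( i < 10 )); do
          n=$(( RANDOM % 8 + 1 ))       # namespace IDs 1-8; the trace shows a varying order
          "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$subsys" "null$((n - 1))" || true
          (( ++i ))
      done
  }
  remove_loop() {                       # shape of ns_hotplug_stress.sh line 18
      local n
      for n in 8 3 4 5 1 2 7 6; do      # one removal order visible in the trace
          "$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$n" || true
      done
  }
  add_loop & remove_loop &              # the two streams interleave, as in the log
  wait

Because the adds and removes deliberately race, a namespace ID may be re-added while still attached or removed while absent; the || true guards stand in for whatever tolerance the real script applies, and the pass criterion is simply that the target survives the churn until the trap is cleared and nvmftestfini runs.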
00:44:38.948 20:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97983 00:44:38.948 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:44:38.948 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:44:38.948 killing process with pid 97983 00:44:38.948 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97983' 00:44:38.948 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 97983 00:44:38.948 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 97983 00:44:39.207 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:44:39.207 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:44:39.207 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:44:39.207 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:44:39.207 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:44:39.207 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:44:39.207 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:44:39.207 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:39.207 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:44:39.207 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:44:39.207 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:44:39.207 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:44:39.207 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:44:39.208 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:44:39.208 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:44:39.208 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:44:39.208 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:44:39.208 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:44:39.208 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link 
delete nvmf_init_if 00:44:39.208 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:44:39.208 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:39.208 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:39.208 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:44:39.208 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:39.208 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:39.208 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:44:39.467 00:44:39.467 real 0m43.585s 00:44:39.467 user 3m14.878s 00:44:39.467 sys 0m19.451s 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:39.467 ************************************ 00:44:39.467 END TEST nvmf_ns_hotplug_stress 00:44:39.467 ************************************ 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:39.467 ************************************ 00:44:39.467 START TEST nvmf_delete_subsystem 00:44:39.467 ************************************ 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:44:39.467 * Looking for test storage... 
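
The teardown that closed out the hot-plug test above is nvmftestfini from test/nvmf/common.sh. Condensed to the steps visible in the trace (kernel module unload via the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines, shutdown of target process 97983, removal of SPDK's firewall rules, and dismantling of the veth/bridge topology), it behaves roughly like the sketch below; the _sketch suffix and the bare kill are simplifications, and nvmfpid stands for the PID the harness tracked.

  # Hedged outline of the nvmftestfini teardown traced above; simplified, not verbatim.
  nvmftestfini_sketch() {
      sync
      modprobe -v -r nvme-tcp nvme-fabrics                  # logs the rmmod lines seen above
      kill "$nvmfpid"                                       # killprocess; pid 97983 in this run
      iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only SPDK's own rules
      local dev
      for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
          ip link set "$dev" nomaster                       # detach the port from the bridge ...
          ip link set "$dev" down                           # ... then bring it down
      done
      ip link delete nvmf_br type bridge
      ip link delete nvmf_init_if
      ip link delete nvmf_init_if2
      ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
      ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
      ip netns delete nvmf_tgt_ns_spdk                      # what remove_spdk_ns amounts to here
  }

The grep -v SPDK_NVMF filter works because every rule the harness installs carries an SPDK_NVMF: comment, as the iptables invocations later in this log show.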
00:44:39.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:44:39.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:39.467 --rc genhtml_branch_coverage=1 00:44:39.467 --rc genhtml_function_coverage=1 00:44:39.467 --rc genhtml_legend=1 00:44:39.467 --rc geninfo_all_blocks=1 00:44:39.467 --rc geninfo_unexecuted_blocks=1 00:44:39.467 00:44:39.467 ' 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:44:39.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:39.467 --rc genhtml_branch_coverage=1 00:44:39.467 --rc genhtml_function_coverage=1 00:44:39.467 --rc genhtml_legend=1 00:44:39.467 --rc geninfo_all_blocks=1 00:44:39.467 --rc geninfo_unexecuted_blocks=1 00:44:39.467 00:44:39.467 ' 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:44:39.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:39.467 --rc genhtml_branch_coverage=1 00:44:39.467 --rc genhtml_function_coverage=1 00:44:39.467 --rc genhtml_legend=1 00:44:39.467 --rc geninfo_all_blocks=1 00:44:39.467 --rc geninfo_unexecuted_blocks=1 00:44:39.467 00:44:39.467 ' 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:44:39.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:39.467 --rc genhtml_branch_coverage=1 00:44:39.467 --rc genhtml_function_coverage=1 00:44:39.467 --rc 
genhtml_legend=1 00:44:39.467 --rc geninfo_all_blocks=1 00:44:39.467 --rc geninfo_unexecuted_blocks=1 00:44:39.467 00:44:39.467 ' 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:39.467 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:39.468 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:39.468 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:39.727 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:44:39.727 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:44:39.727 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:39.727 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:39.727 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:44:39.727 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:39.727 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:39.727 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:44:39.727 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:39.727 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:39.727 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:39.728 20:43:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # nvmf_veth_init 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:39.728 20:43:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:44:39.728 Cannot find device "nvmf_init_br" 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:44:39.728 Cannot find device "nvmf_init_br2" 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:44:39.728 Cannot find device "nvmf_tgt_br" 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:44:39.728 Cannot find device "nvmf_tgt_br2" 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:44:39.728 Cannot find device "nvmf_init_br" 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:44:39.728 Cannot find device "nvmf_init_br2" 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:44:39.728 Cannot find device "nvmf_tgt_br" 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:44:39.728 Cannot find device "nvmf_tgt_br2" 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:44:39.728 Cannot find device "nvmf_br" 00:44:39.728 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:44:39.729 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:44:39.729 Cannot find device "nvmf_init_if" 00:44:39.729 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:44:39.729 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:44:39.729 Cannot find device "nvmf_init_if2" 00:44:39.729 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:44:39.729 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:39.729 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:39.729 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:44:39.729 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:39.729 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:39.729 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:44:39.729 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:44:39.729 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:44:39.729 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:44:39.729 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:44:39.729 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:44:39.729 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:44:39.729 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:44:39.729 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:44:39.729 20:43:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:44:39.729 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:44:39.729 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:44:39.729 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:44:39.729 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:44:39.988 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:44:39.988 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:44:39.988 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:44:39.988 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:44:39.988 20:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:44:39.988 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:44:39.988 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:44:39.988 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:44:39.988 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:44:39.988 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:44:39.988 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:44:39.988 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:44:39.988 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:44:39.988 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:44:39.988 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:44:39.988 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:44:39.988 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:44:39.988 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:44:39.988 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:44:39.988 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:44:39.988 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:44:39.988 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:44:39.988 00:44:39.988 --- 10.0.0.3 ping statistics --- 00:44:39.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:39.988 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:44:39.988 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:44:39.988 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:44:39.988 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:44:39.988 00:44:39.988 --- 10.0.0.4 ping statistics --- 00:44:39.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:39.988 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:44:39.988 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:44:39.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:39.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:44:39.988 00:44:39.988 --- 10.0.0.1 ping statistics --- 00:44:39.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:39.988 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:44:39.988 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:44:39.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:44:39.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:44:39.988 00:44:39.988 --- 10.0.0.2 ping statistics --- 00:44:39.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:39.988 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:44:39.989 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:39.989 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # return 0 00:44:39.989 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:44:39.989 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:39.989 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:44:39.989 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:44:39.989 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:39.989 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:44:39.989 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:44:39.989 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:44:39.989 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:44:39.989 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:39.989 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:44:39.989 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=100513 00:44:39.989 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 100513 00:44:39.989 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:44:39.989 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 100513 ']' 00:44:39.989 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:39.989 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:39.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:39.989 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
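For anyone rebuilding this fixture by hand: the nvmf/common.sh trace above creates two veth pairs whose target-side ends are moved into the nvmf_tgt_ns_spdk namespace, bridges the host-side peers, opens the firewall for port 4420, and then validates connectivity with the four pings just shown. A condensed, hand-written sketch of the same topology, not the script itself (names and addresses copied from the trace; only the first pair is shown, the second mirrors it with nvmf_init_if2/nvmf_tgt_if2 and 10.0.0.2/10.0.0.4):

    # Namespace plus the first initiator/target veth pair (editorial sketch)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge peer
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + bridge peer
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                             # ties the *_br peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # ipts tags each rule with an 'SPDK_NVMF:' comment so teardown (iptr, later in
    # this log) can strip them via iptables-save | grep -v SPDK_NVMF | iptables-restore:
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3                                          # host -> namespace sanity check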
00:44:39.989 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:39.989 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:44:39.989 [2024-10-01 20:43:35.200630] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:39.989 [2024-10-01 20:43:35.201880] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:44:39.989 [2024-10-01 20:43:35.201951] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:40.248 [2024-10-01 20:43:35.343678] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:40.248 [2024-10-01 20:43:35.413555] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:40.248 [2024-10-01 20:43:35.413621] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:40.248 [2024-10-01 20:43:35.413635] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:40.248 [2024-10-01 20:43:35.413645] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:40.248 [2024-10-01 20:43:35.413654] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:40.248 [2024-10-01 20:43:35.414448] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:44:40.248 [2024-10-01 20:43:35.414507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:44:40.507 [2024-10-01 20:43:35.528888] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:40.507 [2024-10-01 20:43:35.529536] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:44:40.507 [2024-10-01 20:43:35.529607] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:44:40.507 [2024-10-01 20:43:35.615372] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:44:40.507 [2024-10-01 20:43:35.635716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:44:40.507 NULL1 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:40.507 20:43:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:44:40.507 Delay0 00:44:40.507 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:40.508 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:40.508 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:40.508 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:44:40.508 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:40.508 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=100551 00:44:40.508 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:44:40.508 20:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:44:40.766 [2024-10-01 20:43:35.830481] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
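rpc_cmd is the harness's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the provisioning traced above can be reproduced by hand; a hedged sketch with values copied from the trace (rpc.py assumed to be on PATH):

    # Transport flags copied verbatim from the trace; -u 8192 sets the I/O unit size,
    # -o is a TCP-specific toggle taken as-is:
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py bdev_null_create NULL1 1000 512            # 1000 MiB null bdev, 512 B blocks
    # Delay0 wraps NULL1 and adds 1,000,000 us (one full second) of average and p99
    # latency to reads and writes alike, guaranteeing a queue full of in-flight
    # commands when the subsystem is deleted mid-run:
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The spdk_nvme_perf invocation that follows then keeps the namespace saturated: -q 128 holds 128 commands outstanding per worker, -w randrw -M 70 is a 70/30 read/write mix, -o 512 issues 512-byte I/Os for 5 seconds (-t 5), and -c 0xC pins the workers to cores 2-3, away from the target's cores 0-1.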
00:44:42.667 20:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:42.667 20:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:42.667 20:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Write completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Write completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Write completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 Write completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 Write completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Write completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 Write completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Write completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 Write completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Write completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Write completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Write completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 Write completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 
Write completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 Write completed with error (sct=0, sc=8) 00:44:42.667 Write completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Write completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Write completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.667 Write completed with error (sct=0, sc=8) 00:44:42.667 starting I/O failed: -6 00:44:42.667 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 
starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 [2024-10-01 20:43:37.867033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeacb20 is same with the state(6) to be set 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 [2024-10-01 20:43:37.868649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0df4000c00 is same with the state(6) to be set 
00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 
00:44:42.668 starting I/O failed: -6 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 Write completed with error (sct=0, sc=8) 00:44:42.668 Read completed with error (sct=0, sc=8) 00:44:42.668 starting I/O failed: -6 00:44:42.668 [2024-10-01 20:43:37.870748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0df400d490 is same with the state(6) to be set 00:44:43.604 [2024-10-01 20:43:38.845720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea8fb0 is same with the state(6) to be set 00:44:43.863 Read completed with error (sct=0, sc=8) 00:44:43.863 Write completed with error (sct=0, sc=8) 00:44:43.863 Read completed with error (sct=0, sc=8) 00:44:43.863 Read completed with error (sct=0, sc=8) 00:44:43.863 Read completed with error (sct=0, sc=8) 00:44:43.863 Read completed with error (sct=0, sc=8) 00:44:43.863 Read completed with error (sct=0, sc=8) 00:44:43.863 Read completed with error (sct=0, sc=8) 00:44:43.863 Write completed with error (sct=0, sc=8) 00:44:43.863 Write completed with error (sct=0, sc=8) 00:44:43.863 Write completed with error (sct=0, sc=8) 00:44:43.863 Read completed with error (sct=0, sc=8) 00:44:43.863 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 [2024-10-01 20:43:38.867024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeacd00 is same with the state(6) to be set 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 
00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 [2024-10-01 20:43:38.867999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0df400d7c0 is same with the state(6) to be set 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 [2024-10-01 20:43:38.868750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7f0df400cfe0 is same with the state(6) to be set 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Write completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 Read completed with error (sct=0, sc=8) 00:44:43.864 [2024-10-01 20:43:38.869484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead8b0 is same with the state(6) to be set 00:44:43.864 Initializing NVMe Controllers 00:44:43.864 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:44:43.864 Controller IO queue size 128, less than required. 00:44:43.864 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:44:43.864 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:44:43.864 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:44:43.864 Initialization complete. Launching workers. 
00:44:43.864 ========================================================
00:44:43.864 Latency(us)
00:44:43.864 Device Information                                                        :       IOPS      MiB/s    Average        min        max
00:44:43.864 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     189.54       0.09  895414.76     498.50 1013505.73
00:44:43.864 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     178.12       0.09  914152.79     893.43 1014890.65
00:44:43.864 ========================================================
00:44:43.864 Total                                                                     :     367.66       0.18  904492.97     498.50 1014890.65
00:44:43.864
00:44:43.864 [2024-10-01 20:43:38.870633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea8fb0 (9): Bad file descriptor
00:44:43.865 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:44:43.865 20:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:44:43.865 20:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:44:43.865 20:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 100551
00:44:43.865 20:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:44:44.432 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:44:44.432 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 100551
00:44:44.432 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (100551) - No such process
00:44:44.432 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 100551
00:44:44.432 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:44:44.432 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 100551
00:44:44.432 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:44:44.432 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 100551
00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:44:44.433 [2024-10-01 20:43:39.396393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=100597 00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100597 00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:44:44.433 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:44:44.433 [2024-10-01 20:43:39.571428] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
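Two readings of the failure flood and summary above, before the second round below: status code type 0 with status code 8 is, per the NVMe base specification's generic status values, Command Aborted due to SQ Deletion, which is exactly what nvmf_delete_subsystem should provoke while Delay0 holds one-second I/Os in flight. The per-core numbers also obey Little's law (outstanding = IOPS x average latency): 189.54 x ~0.895 s is about 170 commands outstanding, i.e. the 128-deep queue plus requests queued at the driver, just as the "Controller IO queue size 128, less than required" warning describes. The alternating kill -0 / sleep 0.5 entries that follow are delete_subsystem.sh waiting for this second, 3-second perf run to exit on its own; a rough reconstruction of that loop from the trace ($perf_pid is an assumed variable name, and the script's exact control flow may differ):

    delay=0
    while kill -0 "$perf_pid"; do      # line 57: perf still alive?
        sleep 0.5                      # line 58
        (( delay++ > 20 )) && exit 1   # line 60: give up after roughly 10 s
    done
    # 'kill: (100597) - No such process' below is the success path: the loop's
    # condition fails once perf exits, and 'wait 100597' then reaps its status.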
00:44:44.690 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:44:44.690 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100597 00:44:44.690 20:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:44:45.255 20:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:44:45.255 20:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100597 00:44:45.255 20:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:44:45.818 20:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:44:45.818 20:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100597 00:44:45.818 20:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:44:46.394 20:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:44:46.394 20:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100597 00:44:46.394 20:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:44:46.687 20:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:44:46.687 20:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100597 00:44:46.687 20:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:44:47.265 20:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:44:47.265 20:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100597 00:44:47.265 20:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:44:47.522 Initializing NVMe Controllers 00:44:47.522 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:44:47.522 Controller IO queue size 128, less than required. 00:44:47.522 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:44:47.522 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:44:47.522 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:44:47.522 Initialization complete. Launching workers. 
00:44:47.522 ======================================================== 00:44:47.522 Latency(us) 00:44:47.522 Device Information : IOPS MiB/s Average min max 00:44:47.522 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002209.18 1000139.00 1007884.27 00:44:47.522 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003987.31 1000164.14 1009069.54 00:44:47.522 ======================================================== 00:44:47.522 Total : 256.00 0.12 1003098.24 1000139.00 1009069.54 00:44:47.522 00:44:47.780 20:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:44:47.780 20:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100597 00:44:47.780 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (100597) - No such process 00:44:47.780 20:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 100597 00:44:47.780 20:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:44:47.780 20:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:44:47.780 20:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:44:47.780 20:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:44:47.780 20:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:47.780 20:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:44:47.780 20:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:47.780 20:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:47.780 rmmod nvme_tcp 00:44:47.780 rmmod nvme_fabrics 00:44:47.780 rmmod nvme_keyring 00:44:48.038 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:48.038 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:44:48.038 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:44:48.038 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 100513 ']' 00:44:48.038 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 100513 00:44:48.038 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 100513 ']' 00:44:48.038 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 100513 00:44:48.038 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:44:48.038 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:48.038 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- 
# ps --no-headers -o comm= 100513 00:44:48.038 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:48.038 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:48.038 killing process with pid 100513 00:44:48.038 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100513' 00:44:48.038 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 100513 00:44:48.038 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 100513 00:44:48.038 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:44:48.039 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:44:48.039 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:44:48.039 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:44:48.039 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:44:48.039 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:44:48.039 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:44:48.039 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:48.039 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:44:48.039 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:44:48.296 00:44:48.296 real 0m8.960s 00:44:48.296 user 0m23.943s 00:44:48.296 sys 0m2.510s 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:48.296 ************************************ 00:44:48.296 END TEST nvmf_delete_subsystem 00:44:48.296 ************************************ 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:48.296 ************************************ 00:44:48.296 START TEST nvmf_host_management 00:44:48.296 ************************************ 00:44:48.296 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:44:48.555 * Looking for test storage... 
00:44:48.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:44:48.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:48.555 --rc genhtml_branch_coverage=1 00:44:48.555 --rc genhtml_function_coverage=1 00:44:48.555 --rc genhtml_legend=1 00:44:48.555 --rc geninfo_all_blocks=1 00:44:48.555 --rc geninfo_unexecuted_blocks=1 00:44:48.555 00:44:48.555 ' 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:44:48.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:48.555 --rc genhtml_branch_coverage=1 00:44:48.555 --rc genhtml_function_coverage=1 00:44:48.555 --rc genhtml_legend=1 00:44:48.555 --rc geninfo_all_blocks=1 00:44:48.555 --rc geninfo_unexecuted_blocks=1 00:44:48.555 00:44:48.555 ' 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:44:48.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:48.555 --rc genhtml_branch_coverage=1 00:44:48.555 --rc genhtml_function_coverage=1 00:44:48.555 --rc genhtml_legend=1 00:44:48.555 --rc geninfo_all_blocks=1 00:44:48.555 --rc geninfo_unexecuted_blocks=1 00:44:48.555 00:44:48.555 ' 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:44:48.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:48.555 --rc genhtml_branch_coverage=1 00:44:48.555 --rc genhtml_function_coverage=1 00:44:48.555 --rc genhtml_legend=1 
00:44:48.555 --rc geninfo_all_blocks=1 00:44:48.555 --rc geninfo_unexecuted_blocks=1 00:44:48.555 00:44:48.555 ' 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:48.555 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:48.556 20:43:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@458 -- # nvmf_veth_init 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:44:48.556 20:43:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:44:48.556 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:44:48.556 Cannot find device "nvmf_init_br" 00:44:48.557 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:44:48.557 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:44:48.557 Cannot find device "nvmf_init_br2" 00:44:48.557 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:44:48.557 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:44:48.557 Cannot find device "nvmf_tgt_br" 00:44:48.557 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:44:48.557 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:44:48.557 Cannot find device "nvmf_tgt_br2" 00:44:48.557 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:44:48.557 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:44:48.815 Cannot find device "nvmf_init_br" 00:44:48.815 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:44:48.815 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 
down 00:44:48.815 Cannot find device "nvmf_init_br2" 00:44:48.815 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:44:48.815 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:44:48.815 Cannot find device "nvmf_tgt_br" 00:44:48.815 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:44:48.815 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:44:48.815 Cannot find device "nvmf_tgt_br2" 00:44:48.815 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:44:48.815 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:44:48.815 Cannot find device "nvmf_br" 00:44:48.815 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:44:48.815 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:44:48.815 Cannot find device "nvmf_init_if" 00:44:48.815 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:44:48.815 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:44:48.816 Cannot find device "nvmf_init_if2" 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:48.816 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:48.816 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:44:48.816 20:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:44:48.816 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:44:48.816 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:44:48.816 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:44:48.816 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:44:48.816 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:44:48.816 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:44:48.816 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:44:49.074 20:43:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:44:49.074 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:44:49.074 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:44:49.074 00:44:49.074 --- 10.0.0.3 ping statistics --- 00:44:49.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:49.074 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:44:49.074 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:44:49.074 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:44:49.074 00:44:49.074 --- 10.0.0.4 ping statistics --- 00:44:49.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:49.074 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:44:49.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:49.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:44:49.074 00:44:49.074 --- 10.0.0.1 ping statistics --- 00:44:49.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:49.074 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:44:49.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:44:49.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:44:49.074 00:44:49.074 --- 10.0.0.2 ping statistics --- 00:44:49.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:49.074 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@459 -- # return 0 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=100873 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 100873 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 100873 ']' 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:49.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
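The four successful pings above close out nvmf_veth_init: both host-side initiator interfaces can reach the namespaced target interfaces across the bridge, and vice versa. Condensed to one leg per element, the topology the log just assembled is (commands copied from the nvmf/common.sh steps above; the second initiator/target pair is built the same way):

    # namespace for the target, one veth pair per endpoint, one bridge joining them
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator leg
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target leg
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge the two legs
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
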
00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:49.074 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:44:49.074 [2024-10-01 20:43:44.205572] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:49.074 [2024-10-01 20:43:44.206841] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:44:49.074 [2024-10-01 20:43:44.206907] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:49.333 [2024-10-01 20:43:44.345202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:49.333 [2024-10-01 20:43:44.394809] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:49.333 [2024-10-01 20:43:44.394877] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:49.333 [2024-10-01 20:43:44.394887] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:49.333 [2024-10-01 20:43:44.394894] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:49.333 [2024-10-01 20:43:44.394900] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:49.333 [2024-10-01 20:43:44.395066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:44:49.333 [2024-10-01 20:43:44.395224] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:44:49.333 [2024-10-01 20:43:44.395291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:44:49.333 [2024-10-01 20:43:44.395292] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:44:49.333 [2024-10-01 20:43:44.479917] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:49.333 [2024-10-01 20:43:44.480343] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:44:49.333 [2024-10-01 20:43:44.480645] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:44:49.333 [2024-10-01 20:43:44.480825] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:44:49.333 [2024-10-01 20:43:44.480962] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
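The reactor notices above are the visible effect of -m 0x1E combined with --interrupt-mode: 0x1E is binary 11110, so reactors come up on cores 1-4 and each spdk_thread is switched to interrupt-driven scheduling, while core 0 is left free (the bdevperf process started below runs with a 0x1 mask and lands there). A quick way to read such a mask, purely for illustration:

    # expand a hex core mask into core numbers: 0x1E -> 1 2 3 4
    mask=0x1E
    for ((core = 0; core < 8; core++)); do
        (( (mask >> core) & 1 )) && printf '%d ' "$core"
    done
    echo
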
00:44:49.333 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:49.334 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:44:49.334 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:44:49.334 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:49.334 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:44:49.334 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:49.334 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:49.334 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:49.334 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:44:49.334 [2024-10-01 20:43:44.564384] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:49.334 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:49.334 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:44:49.334 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:49.334 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:44:49.593 Malloc0 00:44:49.593 [2024-10-01 20:43:44.636382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=100936 00:44:49.593 20:43:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 100936 /var/tmp/bdevperf.sock 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 100936 ']' 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:49.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:49.593 { 00:44:49.593 "params": { 00:44:49.593 "name": "Nvme$subsystem", 00:44:49.593 "trtype": "$TEST_TRANSPORT", 00:44:49.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:49.593 "adrfam": "ipv4", 00:44:49.593 "trsvcid": "$NVMF_PORT", 00:44:49.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:49.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:49.593 "hdgst": ${hdgst:-false}, 00:44:49.593 "ddgst": ${ddgst:-false} 00:44:49.593 }, 00:44:49.593 "method": "bdev_nvme_attach_controller" 00:44:49.593 } 00:44:49.593 EOF 00:44:49.593 )") 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
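gen_nvmf_target_json above expands the per-subsystem heredoc template into one bdev_nvme_attach_controller entry, and the jq/IFS/printf steps around this point splice it into the --json config that bdevperf reads from /dev/fd/63. The fully resolved parameters are printed just below; wrapped in the outer framing (assumed here from how gen_nvmf_target_json in nvmf/common.sh builds its jq heredoc), the complete config would look roughly like:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
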
00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:44:49.593 20:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:44:49.593 "params": { 00:44:49.593 "name": "Nvme0", 00:44:49.593 "trtype": "tcp", 00:44:49.593 "traddr": "10.0.0.3", 00:44:49.593 "adrfam": "ipv4", 00:44:49.593 "trsvcid": "4420", 00:44:49.593 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:49.593 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:49.593 "hdgst": false, 00:44:49.593 "ddgst": false 00:44:49.593 }, 00:44:49.593 "method": "bdev_nvme_attach_controller" 00:44:49.593 }' 00:44:49.593 [2024-10-01 20:43:44.745740] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:44:49.593 [2024-10-01 20:43:44.745846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100936 ] 00:44:49.852 [2024-10-01 20:43:44.887946] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:49.852 [2024-10-01 20:43:44.956826] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:44:50.110 Running I/O for 10 seconds... 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:50.679 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:44:50.679 [2024-10-01 20:43:45.816109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe49180 is same with the state(6) to be set 00:44:50.679 [2024-10-01 20:43:45.816176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe49180 is same with the state(6) to be set 00:44:50.679 [2024-10-01 20:43:45.816205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe49180 is same with the state(6) to be set 00:44:50.679 [2024-10-01 20:43:45.818877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:44:50.679 [2024-10-01 20:43:45.818919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:50.679 [2024-10-01 20:43:45.818950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:44:50.679 [2024-10-01 20:43:45.818959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:50.679 [2024-10-01 20:43:45.818968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:44:50.679 [2024-10-01 20:43:45.818977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:50.679 [2024-10-01 20:43:45.818986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:44:50.680 [2024-10-01 20:43:45.818995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:50.680 [2024-10-01 20:43:45.819004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac78d0 is same with the state(6) to be set 00:44:50.680 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:50.680 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:44:50.680 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:50.680 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:44:50.680 [2024-10-01 20:43:45.828375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:50.680 [2024-10-01 20:43:45.828412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:50.680 [2024-10-01 20:43:45.828448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:50.680 [2024-10-01 20:43:45.828474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:50.680 [2024-10-01 20:43:45.828486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:50.680 [2024-10-01 20:43:45.828495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:50.680 [2024-10-01 20:43:45.828506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:50.680 [2024-10-01 20:43:45.828514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:50.680 [2024-10-01 20:43:45.828524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:50.680 [2024-10-01 20:43:45.828533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:50.680 [2024-10-01 20:43:45.828543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:50.680 [2024-10-01 20:43:45.828568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:50.680 [2024-10-01 20:43:45.828578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:50.680 [2024-10-01 20:43:45.828587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:50.680 [2024-10-01 20:43:45.828598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:50.680 [2024-10-01 20:43:45.828606] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:50.680 [2024-10-01 20:43:45.828617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... the same WRITE / "ABORTED - SQ DELETION (00/08)" pair repeats verbatim for cid:8 through cid:63 (lba 9216 through 16256, len:128 each) while qid:1 is torn down; 56 near-identical command/completion pairs elided ...]
00:44:50.682 [2024-10-01 20:43:45.829833] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xaccd50 was disconnected and freed. reset controller.
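A burst of aborts like the one above is the expected signature of a submission queue being deleted mid-reset, but it is easier to audit in aggregate than line by line. A minimal sketch, assuming GNU awk and a captured log saved as build.log (hypothetical filename), that tallies the aborted completions per submission queue id:

# tally "ABORTED - SQ DELETION" completions by qid
awk '/ABORTED - SQ DELETION/ {
    for (i = 1; i <= NF; i++)
        if ($i ~ /^qid:/) count[$i]++
}
END { for (q in count) print q, count[q] }' build.log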
00:44:50.682 [2024-10-01 20:43:45.829871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac78d0 (9): Bad file descriptor 00:44:50.682 [2024-10-01 20:43:45.830998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:44:50.682 task offset: 8192 on job bdev=Nvme0n1 fails 00:44:50.682 00:44:50.682 Latency(us) 00:44:50.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:50.682 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:44:50.682 Job: Nvme0n1 ended in about 0.68 seconds with error 00:44:50.682 Verification LBA range: start 0x0 length 0x400 00:44:50.682 Nvme0n1 : 0.68 1592.42 99.53 93.67 0.00 36873.11 1876.71 40751.48 00:44:50.682 =================================================================================================================== 00:44:50.682 Total : 1592.42 99.53 93.67 0.00 36873.11 1876.71 40751.48 00:44:50.682 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:50.682 20:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:44:50.682 [2024-10-01 20:43:45.833044] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:50.682 [2024-10-01 20:43:45.835870] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:44:51.617 20:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 100936 00:44:51.617 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (100936) - No such process 00:44:51.617 20:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:44:51.617 20:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:44:51.617 20:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:44:51.617 20:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:44:51.617 20:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:44:51.617 20:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:44:51.617 20:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:51.617 20:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:51.617 { 00:44:51.617 "params": { 00:44:51.617 "name": "Nvme$subsystem", 00:44:51.617 "trtype": "$TEST_TRANSPORT", 00:44:51.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:51.617 "adrfam": "ipv4", 00:44:51.617 "trsvcid": "$NVMF_PORT", 00:44:51.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:51.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:51.617 "hdgst": ${hdgst:-false}, 00:44:51.617 "ddgst": ${ddgst:-false} 00:44:51.617 }, 00:44:51.617 "method": "bdev_nvme_attach_controller" 00:44:51.617 } 00:44:51.617 EOF 00:44:51.617 )") 
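The heredoc just above is how gen_nvmf_target_json assembles one bdev_nvme_attach_controller stanza per subsystem: the $subsystem placeholders are filled in by the surrounding loop, and ${hdgst:-false}/${ddgst:-false} fall back to false unless the caller exported those variables. A minimal standalone sketch of the same pattern (flattened to <<EOF here, whereas the harness uses <<-EOF with tab indentation; the subsystem number is chosen purely for illustration):

#!/usr/bin/env bash
# emit a single attach-controller stanza; digest flags default to false
subsystem=0
cat <<EOF
{
  "params": {
    "name": "Nvme${subsystem}",
    "trtype": "tcp",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF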
00:44:51.617 20:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:44:51.618 20:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:44:51.618 20:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:44:51.618 20:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:44:51.618 "params": { 00:44:51.618 "name": "Nvme0", 00:44:51.618 "trtype": "tcp", 00:44:51.618 "traddr": "10.0.0.3", 00:44:51.618 "adrfam": "ipv4", 00:44:51.618 "trsvcid": "4420", 00:44:51.618 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:51.618 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:51.618 "hdgst": false, 00:44:51.618 "ddgst": false 00:44:51.618 }, 00:44:51.618 "method": "bdev_nvme_attach_controller" 00:44:51.618 }' 00:44:51.876 [2024-10-01 20:43:46.889556] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:44:51.876 [2024-10-01 20:43:46.889659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100982 ] 00:44:51.876 [2024-10-01 20:43:47.021698] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:51.876 [2024-10-01 20:43:47.074522] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:44:52.134 Running I/O for 1 seconds... 00:44:53.068 1704.00 IOPS, 106.50 MiB/s 00:44:53.068 Latency(us) 00:44:53.068 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:53.068 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:44:53.068 Verification LBA range: start 0x0 length 0x400 00:44:53.068 Nvme0n1 : 1.03 1733.12 108.32 0.00 0.00 36236.23 4617.31 33602.09 00:44:53.068 =================================================================================================================== 00:44:53.068 Total : 1733.12 108.32 0.00 0.00 36236.23 4617.31 33602.09 00:44:53.327 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:44:53.327 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:44:53.327 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:44:53.327 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:44:53.327 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:44:53.327 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:44:53.327 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:44:53.327 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:53.327 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:44:53.327 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 
-- # for i in {1..20} 00:44:53.327 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:53.327 rmmod nvme_tcp 00:44:53.586 rmmod nvme_fabrics 00:44:53.586 rmmod nvme_keyring 00:44:53.586 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:53.586 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:44:53.586 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:44:53.586 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 100873 ']' 00:44:53.586 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 100873 00:44:53.586 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 100873 ']' 00:44:53.586 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 100873 00:44:53.586 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:44:53.586 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:53.586 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100873 00:44:53.586 killing process with pid 100873 00:44:53.586 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:44:53.586 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:44:53.586 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100873' 00:44:53.586 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 100873 00:44:53.586 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 100873 00:44:53.845 [2024-10-01 20:43:48.846242] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:44:53.845 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:44:53.845 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:44:53.845 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:44:53.845 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:44:53.845 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:44:53.845 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:44:53.845 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:44:53.845 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:53.845 
20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:44:53.846 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:44:53.846 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:44:53.846 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:44:53.846 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:44:53.846 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:44:53.846 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:44:53.846 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:44:53.846 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:44:53.846 20:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:44:53.846 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:44:53.846 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:44:53.846 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:53.846 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:53.846 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:44:53.846 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:53.846 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:53.846 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:54.104 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:44:54.104 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:44:54.104 00:44:54.104 real 0m5.574s 00:44:54.104 user 0m18.327s 00:44:54.104 sys 0m2.542s 00:44:54.104 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:54.104 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:44:54.104 ************************************ 00:44:54.104 END TEST nvmf_host_management 00:44:54.104 ************************************ 00:44:54.104 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh 
--transport=tcp --interrupt-mode 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:54.105 ************************************ 00:44:54.105 START TEST nvmf_lvol 00:44:54.105 ************************************ 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:44:54.105 * Looking for test storage... 00:44:54.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:44:54.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:54.105 --rc genhtml_branch_coverage=1 00:44:54.105 --rc genhtml_function_coverage=1 00:44:54.105 --rc genhtml_legend=1 00:44:54.105 --rc geninfo_all_blocks=1 00:44:54.105 --rc geninfo_unexecuted_blocks=1 00:44:54.105 00:44:54.105 ' 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:44:54.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:54.105 --rc genhtml_branch_coverage=1 00:44:54.105 --rc genhtml_function_coverage=1 00:44:54.105 --rc genhtml_legend=1 00:44:54.105 --rc geninfo_all_blocks=1 00:44:54.105 --rc geninfo_unexecuted_blocks=1 00:44:54.105 00:44:54.105 ' 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:44:54.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:54.105 --rc genhtml_branch_coverage=1 00:44:54.105 --rc genhtml_function_coverage=1 00:44:54.105 --rc genhtml_legend=1 00:44:54.105 --rc geninfo_all_blocks=1 00:44:54.105 --rc geninfo_unexecuted_blocks=1 00:44:54.105 00:44:54.105 ' 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:44:54.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:54.105 --rc genhtml_branch_coverage=1 00:44:54.105 --rc genhtml_function_coverage=1 00:44:54.105 --rc genhtml_legend=1 00:44:54.105 --rc geninfo_all_blocks=1 00:44:54.105 --rc geninfo_unexecuted_blocks=1 00:44:54.105 00:44:54.105 ' 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:54.105 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:44:54.365 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go prefixes repeated by earlier sourcings of paths/export.sh, elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same value with one more prefix prepended, elided ...] 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... elided ...] 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... exported PATH value, elided ...] 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:54.366 20:43:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@458 -- # nvmf_veth_init 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:44:54.366 Cannot find device "nvmf_init_br" 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:44:54.366 Cannot find device "nvmf_init_br2" 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:44:54.366 Cannot find device "nvmf_tgt_br" 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:44:54.366 Cannot find device "nvmf_tgt_br2" 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:44:54.366 Cannot find device "nvmf_init_br" 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:44:54.366 Cannot find device "nvmf_init_br2" 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:44:54.366 Cannot find 
device "nvmf_tgt_br" 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:44:54.366 Cannot find device "nvmf_tgt_br2" 00:44:54.366 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:44:54.367 Cannot find device "nvmf_br" 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:44:54.367 Cannot find device "nvmf_init_if" 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:44:54.367 Cannot find device "nvmf_init_if2" 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:54.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:54.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:44:54.367 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:44:54.627 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:44:54.627 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:44:54.627 00:44:54.627 --- 10.0.0.3 ping statistics --- 00:44:54.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:54.627 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:44:54.627 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:44:54.627 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:44:54.627 00:44:54.627 --- 10.0.0.4 ping statistics --- 00:44:54.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:54.627 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:44:54.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:54.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:44:54.627 00:44:54.627 --- 10.0.0.1 ping statistics --- 00:44:54.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:54.627 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:44:54.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:54.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:44:54.627 00:44:54.627 --- 10.0.0.2 ping statistics --- 00:44:54.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:54.627 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@459 -- # return 0 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=101244 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 101244 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 101244 ']' 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:54.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:54.627 20:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:44:54.627 [2024-10-01 20:43:49.842634] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:54.627 [2024-10-01 20:43:49.844565] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:44:54.627 [2024-10-01 20:43:49.844681] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:54.886 [2024-10-01 20:43:49.996104] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:44:54.886 [2024-10-01 20:43:50.070025] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:54.887 [2024-10-01 20:43:50.070095] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:54.887 [2024-10-01 20:43:50.070126] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:54.887 [2024-10-01 20:43:50.070138] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:54.887 [2024-10-01 20:43:50.070148] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:54.887 [2024-10-01 20:43:50.070330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:44:54.887 [2024-10-01 20:43:50.070387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:44:54.887 [2024-10-01 20:43:50.070396] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:44:55.146 [2024-10-01 20:43:50.199213] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:55.146 [2024-10-01 20:43:50.199948] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:44:55.146 [2024-10-01 20:43:50.200175] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:44:55.146 [2024-10-01 20:43:50.200274] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
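
(The veth topology and target launch traced above condense to the following sketch. This is a hand-written reconstruction for orientation, not the harness itself: it shows only the first initiator/target veth pair of the two that nvmf/common.sh@180-@187 creates, and it reuses the interface names, addresses, and nvmf_tgt flags exactly as they appear in the trace.)

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target side
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br    # the bridge joins the host-side veth peers
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT       # keep 4420 reachable
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &

(With -m 0x7 and --interrupt-mode, the three reactor notices above are expected: one reactor per core 0-2, with each spdk_thread switched to interrupt mode instead of busy-polling.)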
00:44:55.146 20:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:55.146 20:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:44:55.146 20:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:44:55.146 20:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:55.146 20:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:44:55.146 20:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:55.146 20:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:44:55.405 [2024-10-01 20:43:50.559321] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:55.405 20:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:55.663 20:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:44:55.663 20:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:55.923 20:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:44:55.923 20:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:44:56.490 20:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:44:56.750 20:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=93309da9-2c54-475d-a2ca-8fa426928211 00:44:56.750 20:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 93309da9-2c54-475d-a2ca-8fa426928211 lvol 20 00:44:57.009 20:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=972a7d33-1189-4104-bbec-3cf11f1aa64d 00:44:57.009 20:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:44:57.269 20:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 972a7d33-1189-4104-bbec-3cf11f1aa64d 00:44:57.528 20:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:44:57.787 [2024-10-01 20:43:52.811188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:44:57.787 20:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:44:58.047 20:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:44:58.047 20:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=101378 00:44:58.047 20:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:44:58.984 20:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 972a7d33-1189-4104-bbec-3cf11f1aa64d MY_SNAPSHOT 00:44:59.243 20:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=43646bd8-15c6-4b57-a7f9-266b05a5b6de 00:44:59.243 20:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 972a7d33-1189-4104-bbec-3cf11f1aa64d 30 00:44:59.502 20:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 43646bd8-15c6-4b57-a7f9-266b05a5b6de MY_CLONE 00:45:00.069 20:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=77c563e5-b639-4e2a-9a27-d726e1abc506 00:45:00.069 20:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 77c563e5-b639-4e2a-9a27-d726e1abc506 00:45:00.652 20:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 101378 00:45:08.766 Initializing NVMe Controllers 00:45:08.766 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:45:08.766 Controller IO queue size 128, less than required. 00:45:08.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:45:08.766 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:45:08.766 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:45:08.766 Initialization complete. Launching workers. 
00:45:08.766 ======================================================== 00:45:08.766 Latency(us) 00:45:08.766 Device Information : IOPS MiB/s Average min max 00:45:08.766 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11147.40 43.54 11482.77 5226.73 67933.21 00:45:08.766 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11307.80 44.17 11318.99 1292.62 81791.75 00:45:08.766 ======================================================== 00:45:08.766 Total : 22455.20 87.72 11400.29 1292.62 81791.75 00:45:08.766 00:45:08.766 20:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:08.766 20:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 972a7d33-1189-4104-bbec-3cf11f1aa64d 00:45:08.766 20:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 93309da9-2c54-475d-a2ca-8fa426928211 00:45:09.025 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:45:09.025 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:45:09.025 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:45:09.025 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:45:09.025 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:45:09.025 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:09.025 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:45:09.025 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:09.025 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:09.025 rmmod nvme_tcp 00:45:09.025 rmmod nvme_fabrics 00:45:09.025 rmmod nvme_keyring 00:45:09.026 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:09.026 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:45:09.026 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:45:09.026 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 101244 ']' 00:45:09.026 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 101244 00:45:09.026 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 101244 ']' 00:45:09.026 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 101244 00:45:09.026 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:45:09.026 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:09.026 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101244 00:45:09.026 20:44:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:09.026 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:09.026 killing process with pid 101244 00:45:09.026 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101244' 00:45:09.026 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 101244 00:45:09.026 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 101244 00:45:09.285 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:45:09.285 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:45:09.285 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:45:09.285 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:45:09.285 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:45:09.285 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:45:09.285 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:45:09.285 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:09.285 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:45:09.285 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:45:09.285 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:45:09.285 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:45:09.285 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:45:09.285 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:45:09.285 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:45:09.285 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:45:09.285 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:45:09.285 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:45:09.544 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:45:09.544 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:45:09.544 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:09.544 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:09.544 
20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:45:09.544 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:09.544 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:09.544 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:09.544 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:45:09.544 00:45:09.544 real 0m15.490s 00:45:09.544 user 0m55.727s 00:45:09.544 sys 0m5.568s 00:45:09.544 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:09.544 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:45:09.544 ************************************ 00:45:09.544 END TEST nvmf_lvol 00:45:09.544 ************************************ 00:45:09.544 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:45:09.544 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:45:09.544 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:09.544 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:45:09.544 ************************************ 00:45:09.544 START TEST nvmf_lvs_grow 00:45:09.544 ************************************ 00:45:09.544 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:45:09.813 * Looking for test storage... 
00:45:09.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:45:09.813 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:09.813 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:45:09.813 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:09.813 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:09.813 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:09.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:09.814 --rc genhtml_branch_coverage=1 00:45:09.814 --rc genhtml_function_coverage=1 00:45:09.814 --rc genhtml_legend=1 00:45:09.814 --rc geninfo_all_blocks=1 00:45:09.814 --rc geninfo_unexecuted_blocks=1 00:45:09.814 00:45:09.814 ' 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:09.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:09.814 --rc genhtml_branch_coverage=1 00:45:09.814 --rc genhtml_function_coverage=1 00:45:09.814 --rc genhtml_legend=1 00:45:09.814 --rc geninfo_all_blocks=1 00:45:09.814 --rc geninfo_unexecuted_blocks=1 00:45:09.814 00:45:09.814 ' 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:09.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:09.814 --rc genhtml_branch_coverage=1 00:45:09.814 --rc genhtml_function_coverage=1 00:45:09.814 --rc genhtml_legend=1 00:45:09.814 --rc geninfo_all_blocks=1 00:45:09.814 --rc geninfo_unexecuted_blocks=1 00:45:09.814 00:45:09.814 ' 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:09.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:09.814 --rc genhtml_branch_coverage=1 00:45:09.814 --rc genhtml_function_coverage=1 00:45:09.814 --rc genhtml_legend=1 00:45:09.814 --rc geninfo_all_blocks=1 00:45:09.814 --rc geninfo_unexecuted_blocks=1 00:45:09.814 00:45:09.814 ' 00:45:09.814 20:44:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:09.814 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
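
(The build_nvmf_app_args trace running through this stretch, nvmf/common.sh@25-@34, assembles the target command line as a bash array; the netns prefix is spliced on later at @227, once the interfaces exist. A simplified sketch follows: the literal guards shown in the trace, '[' 0 -eq 1 ']' and '[' 1 -eq 1 ']', are collapsed to their outcomes, and the array is assumed to start as the bare nvmf_tgt path, which is consistent with the final command at @506.)

    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id plus full tracepoint mask (@29)
    NVMF_APP+=("${NO_HUGE[@]}")                   # empty in this run; hugepages stay enabled (@31)
    NVMF_APP+=(--interrupt-mode)                  # the @33/@34 branch taken just below
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # @227: run inside the namespace
    "${NVMF_APP[@]}" -m 0x1 &                     # per-test core mask appended at launch (@506)

(Keeping the command as an array rather than a flat string is what lets the harness prepend the ip netns exec wrapper and append per-test masks without quoting bugs.)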
00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@458 -- # nvmf_veth_init 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:09.815 20:44:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:45:09.815 Cannot find device "nvmf_init_br" 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:45:09.815 Cannot find device "nvmf_init_br2" 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:45:09.815 Cannot find device "nvmf_tgt_br" 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:45:09.815 Cannot find device "nvmf_tgt_br2" 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:45:09.815 Cannot find device "nvmf_init_br" 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:45:09.815 20:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:45:09.815 Cannot find device "nvmf_init_br2" 00:45:09.815 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:45:09.815 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:45:09.815 Cannot find device "nvmf_tgt_br" 00:45:09.815 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@168 -- # true 00:45:09.815 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:45:09.815 Cannot find device "nvmf_tgt_br2" 00:45:09.815 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:45:09.815 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:45:09.815 Cannot find device "nvmf_br" 00:45:09.815 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:45:09.815 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:45:09.815 Cannot find device "nvmf_init_if" 00:45:09.815 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:45:09.815 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:45:10.075 Cannot find device "nvmf_init_if2" 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:10.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:10.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping 
-c 1 10.0.0.3 00:45:10.075 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:45:10.075 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:45:10.075 00:45:10.075 --- 10.0.0.3 ping statistics --- 00:45:10.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:10.075 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:45:10.075 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:45:10.075 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:45:10.075 00:45:10.075 --- 10.0.0.4 ping statistics --- 00:45:10.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:10.075 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:10.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:10.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:45:10.075 00:45:10.075 --- 10.0.0.1 ping statistics --- 00:45:10.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:10.075 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:45:10.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:10.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:45:10.075 00:45:10.075 --- 10.0.0.2 ping statistics --- 00:45:10.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:10.075 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@459 -- # return 0 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=101793 00:45:10.075 20:44:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 101793 00:45:10.075 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:45:10.076 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 101793 ']' 00:45:10.076 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:10.076 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:10.076 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:10.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:10.076 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:10.076 20:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:45:10.335 [2024-10-01 20:44:05.375550] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:45:10.335 [2024-10-01 20:44:05.376621] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:45:10.335 [2024-10-01 20:44:05.376708] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:10.335 [2024-10-01 20:44:05.500753] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:10.335 [2024-10-01 20:44:05.553482] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:10.335 [2024-10-01 20:44:05.553561] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:10.335 [2024-10-01 20:44:05.553587] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:10.335 [2024-10-01 20:44:05.553593] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:10.335 [2024-10-01 20:44:05.553599] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:10.335 [2024-10-01 20:44:05.553639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:45:10.594 [2024-10-01 20:44:05.640017] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:45:10.594 [2024-10-01 20:44:05.640337] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
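
(With the second target up, pid 101793 on a single core, the lvs_grow_clean body traced below drives lvstore growth over a resizable AIO bdev. The sequence condenses target/nvmf_lvs_grow.sh@23-@38 as they appear in the trace; the $rpc/$aio shorthands are this sketch's own, everything else is taken from the traced commands.)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    rm -f "$aio"
    truncate -s 200M "$aio"                                  # 200 MiB backing file (@23-@24)
    $rpc bdev_aio_create "$aio" aio_bdev 4096                # AIO bdev, 51200 4 KiB blocks (@25)
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)     # store reports 49 data clusters (@28-@29)
    $rpc bdev_lvol_create -u "$lvs" lvol 150                 # 150 MiB lvol in the store (@33)
    truncate -s 400M "$aio"                                  # grow the file to 400 MiB (@36)
    $rpc bdev_aio_rescan aio_bdev                            # AIO bdev picks up 102400 blocks (@37)
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # @38: still 49

(The point of the @38 check is that rescanning the base bdev alone does not grow the store: the lvstore still reports 49 clusters after the resize, which is the gap the rest of the lvs_grow test is built around.)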
00:45:11.163 20:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:11.163 20:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:45:11.163 20:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:45:11.163 20:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:11.163 20:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:45:11.163 20:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:11.163 20:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:45:11.422 [2024-10-01 20:44:06.590447] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:11.422 20:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:45:11.422 20:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:11.422 20:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:11.422 20:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:45:11.422 ************************************ 00:45:11.423 START TEST lvs_grow_clean 00:45:11.423 ************************************ 00:45:11.423 20:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:45:11.423 20:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:45:11.423 20:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:45:11.423 20:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:45:11.423 20:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:45:11.423 20:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:45:11.423 20:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:45:11.423 20:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:45:11.423 20:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:45:11.423 20:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:45:11.991 20:44:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:45:11.991 20:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:45:12.250 20:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=32a4294c-d654-49d2-8719-f57b741ffe38 00:45:12.250 20:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32a4294c-d654-49d2-8719-f57b741ffe38 00:45:12.250 20:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:45:12.250 20:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:45:12.250 20:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:45:12.250 20:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 32a4294c-d654-49d2-8719-f57b741ffe38 lvol 150 00:45:12.509 20:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=abad1970-2e87-4ed2-a06b-b14b0ccd842c 00:45:12.510 20:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:45:12.510 20:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:45:12.768 [2024-10-01 20:44:07.998259] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:45:12.768 [2024-10-01 20:44:07.998417] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:45:12.768 true 00:45:12.768 20:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32a4294c-d654-49d2-8719-f57b741ffe38 00:45:12.768 20:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:45:13.336 20:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:45:13.336 20:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:45:13.336 20:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 abad1970-2e87-4ed2-a06b-b14b0ccd842c 00:45:13.596 20:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:45:13.855 [2024-10-01 20:44:09.038711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:45:13.855 20:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:45:14.114 20:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=101954 00:45:14.114 20:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:45:14.114 20:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:45:14.114 20:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 101954 /var/tmp/bdevperf.sock 00:45:14.114 20:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 101954 ']' 00:45:14.114 20:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:45:14.114 20:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:14.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:45:14.114 20:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:45:14.114 20:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:14.114 20:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:45:14.373 [2024-10-01 20:44:09.395353] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
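The bdevperf launch above is the I/O-generator half of the test: started with -z it sits idle on /var/tmp/bdevperf.sock until the harness drives it over RPC, first attaching the exported namespace as a local NVMe bdev and then kicking off the timed run, as the entries below show. A minimal sketch of that sequence, using the exact paths and endpoint from this trace (-o/-q/-w/-t set a 4 KiB, QD-128, 10-second randwrite workload; reading -S 1 as a one-second live-stats period is my assumption):

    # start bdevperf idle (-z): 4 KiB I/O, QD 128, randwrite, 10 s, core mask 0x2
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

    # attach the lvol exported by the target as local bdev Nvme0 over NVMe/TCP
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # tell the idle bdevperf to run the configured workload
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests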
00:45:14.373 [2024-10-01 20:44:09.395454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101954 ] 00:45:14.373 [2024-10-01 20:44:09.524961] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:14.373 [2024-10-01 20:44:09.577523] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:45:15.311 20:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:15.311 20:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:45:15.311 20:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:45:15.571 Nvme0n1 00:45:15.571 20:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:45:15.830 [ 00:45:15.830 { 00:45:15.830 "aliases": [ 00:45:15.830 "abad1970-2e87-4ed2-a06b-b14b0ccd842c" 00:45:15.830 ], 00:45:15.830 "assigned_rate_limits": { 00:45:15.830 "r_mbytes_per_sec": 0, 00:45:15.830 "rw_ios_per_sec": 0, 00:45:15.831 "rw_mbytes_per_sec": 0, 00:45:15.831 "w_mbytes_per_sec": 0 00:45:15.831 }, 00:45:15.831 "block_size": 4096, 00:45:15.831 "claimed": false, 00:45:15.831 "driver_specific": { 00:45:15.831 "mp_policy": "active_passive", 00:45:15.831 "nvme": [ 00:45:15.831 { 00:45:15.831 "ctrlr_data": { 00:45:15.831 "ana_reporting": false, 00:45:15.831 "cntlid": 1, 00:45:15.831 "firmware_revision": "25.01", 00:45:15.831 "model_number": "SPDK bdev Controller", 00:45:15.831 "multi_ctrlr": true, 00:45:15.831 "oacs": { 00:45:15.831 "firmware": 0, 00:45:15.831 "format": 0, 00:45:15.831 "ns_manage": 0, 00:45:15.831 "security": 0 00:45:15.831 }, 00:45:15.831 "serial_number": "SPDK0", 00:45:15.831 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:15.831 "vendor_id": "0x8086" 00:45:15.831 }, 00:45:15.831 "ns_data": { 00:45:15.831 "can_share": true, 00:45:15.831 "id": 1 00:45:15.831 }, 00:45:15.831 "trid": { 00:45:15.831 "adrfam": "IPv4", 00:45:15.831 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:15.831 "traddr": "10.0.0.3", 00:45:15.831 "trsvcid": "4420", 00:45:15.831 "trtype": "TCP" 00:45:15.831 }, 00:45:15.831 "vs": { 00:45:15.831 "nvme_version": "1.3" 00:45:15.831 } 00:45:15.831 } 00:45:15.831 ] 00:45:15.831 }, 00:45:15.831 "memory_domains": [ 00:45:15.831 { 00:45:15.831 "dma_device_id": "system", 00:45:15.831 "dma_device_type": 1 00:45:15.831 } 00:45:15.831 ], 00:45:15.831 "name": "Nvme0n1", 00:45:15.831 "num_blocks": 38912, 00:45:15.831 "numa_id": -1, 00:45:15.831 "product_name": "NVMe disk", 00:45:15.831 "supported_io_types": { 00:45:15.831 "abort": true, 00:45:15.831 "compare": true, 00:45:15.831 "compare_and_write": true, 00:45:15.831 "copy": true, 00:45:15.831 "flush": true, 00:45:15.831 "get_zone_info": false, 00:45:15.831 "nvme_admin": true, 00:45:15.831 "nvme_io": true, 00:45:15.831 "nvme_io_md": false, 00:45:15.831 "nvme_iov_md": false, 00:45:15.831 "read": true, 00:45:15.831 "reset": true, 00:45:15.831 "seek_data": false, 00:45:15.831 
"seek_hole": false, 00:45:15.831 "unmap": true, 00:45:15.831 "write": true, 00:45:15.831 "write_zeroes": true, 00:45:15.831 "zcopy": false, 00:45:15.831 "zone_append": false, 00:45:15.831 "zone_management": false 00:45:15.831 }, 00:45:15.831 "uuid": "abad1970-2e87-4ed2-a06b-b14b0ccd842c", 00:45:15.831 "zoned": false 00:45:15.831 } 00:45:15.831 ] 00:45:15.831 20:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:45:15.831 20:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=101996 00:45:15.831 20:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:45:15.831 Running I/O for 10 seconds... 00:45:16.768 Latency(us) 00:45:16.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:16.768 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:45:16.768 Nvme0n1 : 1.00 6461.00 25.24 0.00 0.00 0.00 0.00 0.00 00:45:16.768 =================================================================================================================== 00:45:16.768 Total : 6461.00 25.24 0.00 0.00 0.00 0.00 0.00 00:45:16.768 00:45:17.705 20:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 32a4294c-d654-49d2-8719-f57b741ffe38 00:45:17.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:45:17.964 Nvme0n1 : 2.00 6744.00 26.34 0.00 0.00 0.00 0.00 0.00 00:45:17.964 =================================================================================================================== 00:45:17.964 Total : 6744.00 26.34 0.00 0.00 0.00 0.00 0.00 00:45:17.964 00:45:18.223 true 00:45:18.223 20:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32a4294c-d654-49d2-8719-f57b741ffe38 00:45:18.223 20:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:45:18.482 20:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:45:18.482 20:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:45:18.482 20:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 101996 00:45:18.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:45:18.741 Nvme0n1 : 3.00 6769.67 26.44 0.00 0.00 0.00 0.00 0.00 00:45:18.741 =================================================================================================================== 00:45:18.741 Total : 6769.67 26.44 0.00 0.00 0.00 0.00 0.00 00:45:18.741 00:45:20.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:45:20.118 Nvme0n1 : 4.00 6769.00 26.44 0.00 0.00 0.00 0.00 0.00 00:45:20.118 =================================================================================================================== 00:45:20.118 Total : 6769.00 26.44 0.00 0.00 0.00 0.00 0.00 00:45:20.118 00:45:21.054 Job: 
Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:45:21.054 Nvme0n1 : 5.00 6767.80 26.44 0.00 0.00 0.00 0.00 0.00 00:45:21.054 =================================================================================================================== 00:45:21.054 Total : 6767.80 26.44 0.00 0.00 0.00 0.00 0.00 00:45:21.054 00:45:21.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:45:21.991 Nvme0n1 : 6.00 6747.33 26.36 0.00 0.00 0.00 0.00 0.00 00:45:21.991 =================================================================================================================== 00:45:21.991 Total : 6747.33 26.36 0.00 0.00 0.00 0.00 0.00 00:45:21.991 00:45:22.926 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:45:22.926 Nvme0n1 : 7.00 6716.57 26.24 0.00 0.00 0.00 0.00 0.00 00:45:22.926 =================================================================================================================== 00:45:22.926 Total : 6716.57 26.24 0.00 0.00 0.00 0.00 0.00 00:45:22.926 00:45:23.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:45:23.861 Nvme0n1 : 8.00 6707.25 26.20 0.00 0.00 0.00 0.00 0.00 00:45:23.861 =================================================================================================================== 00:45:23.861 Total : 6707.25 26.20 0.00 0.00 0.00 0.00 0.00 00:45:23.861 00:45:24.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:45:24.797 Nvme0n1 : 9.00 6688.22 26.13 0.00 0.00 0.00 0.00 0.00 00:45:24.797 =================================================================================================================== 00:45:24.797 Total : 6688.22 26.13 0.00 0.00 0.00 0.00 0.00 00:45:24.797 00:45:26.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:45:26.176 Nvme0n1 : 10.00 6668.80 26.05 0.00 0.00 0.00 0.00 0.00 00:45:26.176 =================================================================================================================== 00:45:26.176 Total : 6668.80 26.05 0.00 0.00 0.00 0.00 0.00 00:45:26.176 00:45:26.176 00:45:26.176 Latency(us) 00:45:26.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:26.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:45:26.176 Nvme0n1 : 10.01 6672.57 26.06 0.00 0.00 19175.94 9055.88 42896.29 00:45:26.176 =================================================================================================================== 00:45:26.176 Total : 6672.57 26.06 0.00 0.00 19175.94 9055.88 42896.29 00:45:26.176 { 00:45:26.176 "results": [ 00:45:26.176 { 00:45:26.176 "job": "Nvme0n1", 00:45:26.176 "core_mask": "0x2", 00:45:26.176 "workload": "randwrite", 00:45:26.176 "status": "finished", 00:45:26.176 "queue_depth": 128, 00:45:26.176 "io_size": 4096, 00:45:26.176 "runtime": 10.013531, 00:45:26.176 "iops": 6672.57134371482, 00:45:26.176 "mibps": 26.064731811386014, 00:45:26.176 "io_failed": 0, 00:45:26.176 "io_timeout": 0, 00:45:26.176 "avg_latency_us": 19175.94126088471, 00:45:26.176 "min_latency_us": 9055.883636363636, 00:45:26.176 "max_latency_us": 42896.29090909091 00:45:26.176 } 00:45:26.176 ], 00:45:26.176 "core_count": 1 00:45:26.176 } 00:45:26.176 20:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 101954 00:45:26.176 20:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 
-- # '[' -z 101954 ']' 00:45:26.176 20:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 101954 00:45:26.176 20:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:45:26.176 20:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:26.176 20:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101954 00:45:26.176 20:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:45:26.176 20:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:45:26.176 killing process with pid 101954 00:45:26.176 20:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101954' 00:45:26.176 20:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 101954 00:45:26.176 Received shutdown signal, test time was about 10.000000 seconds 00:45:26.176 00:45:26.176 Latency(us) 00:45:26.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:26.177 =================================================================================================================== 00:45:26.177 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:26.177 20:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 101954 00:45:26.177 20:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:45:26.436 20:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:26.694 20:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32a4294c-d654-49d2-8719-f57b741ffe38 00:45:26.694 20:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:45:26.694 20:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:45:26.694 20:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:45:26.694 20:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:45:26.954 [2024-10-01 20:44:22.162364] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:45:26.954 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32a4294c-d654-49d2-8719-f57b741ffe38 00:45:26.954 20:44:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:45:26.954 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32a4294c-d654-49d2-8719-f57b741ffe38 00:45:26.954 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:26.954 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:26.954 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:26.954 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:26.954 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:26.954 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:26.954 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:26.954 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:45:26.954 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32a4294c-d654-49d2-8719-f57b741ffe38 00:45:27.213 2024/10/01 20:44:22 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:32a4294c-d654-49d2-8719-f57b741ffe38], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:45:27.213 request: 00:45:27.213 { 00:45:27.213 "method": "bdev_lvol_get_lvstores", 00:45:27.213 "params": { 00:45:27.213 "uuid": "32a4294c-d654-49d2-8719-f57b741ffe38" 00:45:27.213 } 00:45:27.213 } 00:45:27.213 Got JSON-RPC error response 00:45:27.213 GoRPCClient: error on JSON-RPC call 00:45:27.213 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:45:27.213 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:27.213 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:27.213 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:27.213 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:45:27.472 aio_bdev 00:45:27.472 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev abad1970-2e87-4ed2-a06b-b14b0ccd842c 00:45:27.472 
20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=abad1970-2e87-4ed2-a06b-b14b0ccd842c 00:45:27.472 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:45:27.473 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:45:27.473 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:45:27.473 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:45:27.473 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:45:27.731 20:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b abad1970-2e87-4ed2-a06b-b14b0ccd842c -t 2000 00:45:27.989 [ 00:45:27.990 { 00:45:27.990 "aliases": [ 00:45:27.990 "lvs/lvol" 00:45:27.990 ], 00:45:27.990 "assigned_rate_limits": { 00:45:27.990 "r_mbytes_per_sec": 0, 00:45:27.990 "rw_ios_per_sec": 0, 00:45:27.990 "rw_mbytes_per_sec": 0, 00:45:27.990 "w_mbytes_per_sec": 0 00:45:27.990 }, 00:45:27.990 "block_size": 4096, 00:45:27.990 "claimed": false, 00:45:27.990 "driver_specific": { 00:45:27.990 "lvol": { 00:45:27.990 "base_bdev": "aio_bdev", 00:45:27.990 "clone": false, 00:45:27.990 "esnap_clone": false, 00:45:27.990 "lvol_store_uuid": "32a4294c-d654-49d2-8719-f57b741ffe38", 00:45:27.990 "num_allocated_clusters": 38, 00:45:27.990 "snapshot": false, 00:45:27.990 "thin_provision": false 00:45:27.990 } 00:45:27.990 }, 00:45:27.990 "name": "abad1970-2e87-4ed2-a06b-b14b0ccd842c", 00:45:27.990 "num_blocks": 38912, 00:45:27.990 "product_name": "Logical Volume", 00:45:27.990 "supported_io_types": { 00:45:27.990 "abort": false, 00:45:27.990 "compare": false, 00:45:27.990 "compare_and_write": false, 00:45:27.990 "copy": false, 00:45:27.990 "flush": false, 00:45:27.990 "get_zone_info": false, 00:45:27.990 "nvme_admin": false, 00:45:27.990 "nvme_io": false, 00:45:27.990 "nvme_io_md": false, 00:45:27.990 "nvme_iov_md": false, 00:45:27.990 "read": true, 00:45:27.990 "reset": true, 00:45:27.990 "seek_data": true, 00:45:27.990 "seek_hole": true, 00:45:27.990 "unmap": true, 00:45:27.990 "write": true, 00:45:27.990 "write_zeroes": true, 00:45:27.990 "zcopy": false, 00:45:27.990 "zone_append": false, 00:45:27.990 "zone_management": false 00:45:27.990 }, 00:45:27.990 "uuid": "abad1970-2e87-4ed2-a06b-b14b0ccd842c", 00:45:27.990 "zoned": false 00:45:27.990 } 00:45:27.990 ] 00:45:27.990 20:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:45:27.990 20:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32a4294c-d654-49d2-8719-f57b741ffe38 00:45:27.990 20:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:45:28.247 20:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 
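The cluster counts the test keeps asserting all follow from the sizes fixed at the top of lvs_grow: a 200M backing file carved into 4 MiB clusters (--cluster-sz 4194304) gives 50 clusters, of which one is apparently consumed by lvstore metadata, hence total_data_clusters == 49; after truncate -s 400M and the grow it becomes 99. The 150M lvol rounds up to 38 allocated clusters, leaving the 61 free clusters checked here. A quick shell check of that arithmetic (the one-cluster metadata overhead is inferred from the 49/99 values in this trace, not from the blobstore format):

    cluster_mb=4                                      # --cluster-sz 4194304
    echo $(( 200 / cluster_mb - 1 ))                  # 49 data clusters at 200M
    echo $(( 400 / cluster_mb - 1 ))                  # 99 data clusters at 400M
    echo $(( (150 + cluster_mb - 1) / cluster_mb ))   # 38 clusters for the 150M lvol
    echo $(( 400 / cluster_mb - 1 - 38 ))             # 61 clusters still free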
00:45:28.247 20:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:45:28.247 20:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32a4294c-d654-49d2-8719-f57b741ffe38 00:45:28.505 20:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:45:28.505 20:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete abad1970-2e87-4ed2-a06b-b14b0ccd842c 00:45:28.505 20:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 32a4294c-d654-49d2-8719-f57b741ffe38 00:45:29.072 20:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:45:29.072 20:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:45:29.639 00:45:29.639 real 0m18.016s 00:45:29.639 user 0m17.522s 00:45:29.639 sys 0m2.019s 00:45:29.639 20:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:29.639 ************************************ 00:45:29.639 END TEST lvs_grow_clean 00:45:29.639 ************************************ 00:45:29.639 20:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:45:29.639 20:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:45:29.639 20:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:45:29.639 20:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:29.639 20:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:45:29.639 ************************************ 00:45:29.639 START TEST lvs_grow_dirty 00:45:29.639 ************************************ 00:45:29.639 20:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:45:29.639 20:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:45:29.639 20:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:45:29.639 20:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:45:29.639 20:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:45:29.639 20:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:45:29.639 20:44:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:45:29.639 20:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:45:29.639 20:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:45:29.639 20:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:45:29.897 20:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:45:29.897 20:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:45:30.155 20:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=81e7b77b-dbe0-4141-a652-b000ff0a22ea 00:45:30.155 20:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81e7b77b-dbe0-4141-a652-b000ff0a22ea 00:45:30.155 20:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:45:30.413 20:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:45:30.413 20:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:45:30.413 20:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 81e7b77b-dbe0-4141-a652-b000ff0a22ea lvol 150 00:45:30.671 20:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4ab5e733-b827-4703-9d6f-62fe3b08a61a 00:45:30.671 20:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:45:30.671 20:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:45:30.940 [2024-10-01 20:44:25.970277] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:45:30.940 [2024-10-01 20:44:25.970430] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:45:30.940 true 00:45:30.940 20:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:45:30.940 20:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 81e7b77b-dbe0-4141-a652-b000ff0a22ea 00:45:31.246 20:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:45:31.246 20:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:45:31.515 20:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4ab5e733-b827-4703-9d6f-62fe3b08a61a 00:45:31.773 20:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:45:32.032 [2024-10-01 20:44:27.058668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:45:32.032 20:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:45:32.032 20:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=102376 00:45:32.032 20:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:45:32.032 20:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:45:32.032 20:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 102376 /var/tmp/bdevperf.sock 00:45:32.032 20:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 102376 ']' 00:45:32.032 20:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:45:32.032 20:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:32.032 20:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:45:32.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:45:32.032 20:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:32.032 20:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:45:32.291 [2024-10-01 20:44:27.339354] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
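As in the clean pass, growing the volume is a three-step dance: the backing file is enlarged, the aio bdev is told to re-read its size (the rescan notice above reports 51200 -> 102400 4 KiB blocks, i.e. 200M -> 400M), and only then is the lvstore itself grown, which happens at nvmf_lvs_grow.sh@60 below while bdevperf is running. The vbdev_lvol "Unsupported bdev event: type 1" notice suggests the resize event is not propagated to the lvstore automatically, which is presumably why the explicit grow RPC exists. A sketch of the sequence, with the rpc.py path shortened into a variable for readability:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    truncate -s 400M "$aio"                  # enlarge the backing file
    "$rpc" bdev_aio_rescan aio_bdev          # aio bdev: 51200 -> 102400 blocks
    "$rpc" bdev_lvol_grow_lvstore -u 81e7b77b-dbe0-4141-a652-b000ff0a22ea
    "$rpc" bdev_lvol_get_lvstores -u 81e7b77b-dbe0-4141-a652-b000ff0a22ea \
        | jq -r '.[0].total_data_clusters'   # expect 99 afterwards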
00:45:32.291 [2024-10-01 20:44:27.339462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102376 ] 00:45:32.291 [2024-10-01 20:44:27.476105] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:32.291 [2024-10-01 20:44:27.528285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:45:33.225 20:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:33.225 20:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:45:33.225 20:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:45:33.483 Nvme0n1 00:45:33.483 20:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:45:33.742 [ 00:45:33.742 { 00:45:33.742 "aliases": [ 00:45:33.742 "4ab5e733-b827-4703-9d6f-62fe3b08a61a" 00:45:33.742 ], 00:45:33.742 "assigned_rate_limits": { 00:45:33.742 "r_mbytes_per_sec": 0, 00:45:33.742 "rw_ios_per_sec": 0, 00:45:33.742 "rw_mbytes_per_sec": 0, 00:45:33.742 "w_mbytes_per_sec": 0 00:45:33.742 }, 00:45:33.742 "block_size": 4096, 00:45:33.742 "claimed": false, 00:45:33.742 "driver_specific": { 00:45:33.742 "mp_policy": "active_passive", 00:45:33.742 "nvme": [ 00:45:33.742 { 00:45:33.742 "ctrlr_data": { 00:45:33.742 "ana_reporting": false, 00:45:33.742 "cntlid": 1, 00:45:33.742 "firmware_revision": "25.01", 00:45:33.742 "model_number": "SPDK bdev Controller", 00:45:33.742 "multi_ctrlr": true, 00:45:33.742 "oacs": { 00:45:33.742 "firmware": 0, 00:45:33.742 "format": 0, 00:45:33.742 "ns_manage": 0, 00:45:33.742 "security": 0 00:45:33.742 }, 00:45:33.742 "serial_number": "SPDK0", 00:45:33.742 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:33.742 "vendor_id": "0x8086" 00:45:33.742 }, 00:45:33.742 "ns_data": { 00:45:33.742 "can_share": true, 00:45:33.742 "id": 1 00:45:33.742 }, 00:45:33.742 "trid": { 00:45:33.742 "adrfam": "IPv4", 00:45:33.742 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:33.742 "traddr": "10.0.0.3", 00:45:33.742 "trsvcid": "4420", 00:45:33.742 "trtype": "TCP" 00:45:33.742 }, 00:45:33.742 "vs": { 00:45:33.742 "nvme_version": "1.3" 00:45:33.742 } 00:45:33.742 } 00:45:33.742 ] 00:45:33.742 }, 00:45:33.742 "memory_domains": [ 00:45:33.742 { 00:45:33.742 "dma_device_id": "system", 00:45:33.742 "dma_device_type": 1 00:45:33.742 } 00:45:33.742 ], 00:45:33.742 "name": "Nvme0n1", 00:45:33.742 "num_blocks": 38912, 00:45:33.742 "numa_id": -1, 00:45:33.742 "product_name": "NVMe disk", 00:45:33.742 "supported_io_types": { 00:45:33.742 "abort": true, 00:45:33.742 "compare": true, 00:45:33.742 "compare_and_write": true, 00:45:33.742 "copy": true, 00:45:33.742 "flush": true, 00:45:33.742 "get_zone_info": false, 00:45:33.742 "nvme_admin": true, 00:45:33.742 "nvme_io": true, 00:45:33.742 "nvme_io_md": false, 00:45:33.742 "nvme_iov_md": false, 00:45:33.742 "read": true, 00:45:33.742 "reset": true, 00:45:33.742 "seek_data": false, 00:45:33.742 
"seek_hole": false, 00:45:33.742 "unmap": true, 00:45:33.742 "write": true, 00:45:33.742 "write_zeroes": true, 00:45:33.742 "zcopy": false, 00:45:33.742 "zone_append": false, 00:45:33.742 "zone_management": false 00:45:33.742 }, 00:45:33.742 "uuid": "4ab5e733-b827-4703-9d6f-62fe3b08a61a", 00:45:33.742 "zoned": false 00:45:33.742 } 00:45:33.742 ] 00:45:33.743 20:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:45:33.743 20:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=102424 00:45:33.743 20:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:45:33.743 Running I/O for 10 seconds... 00:45:34.680 Latency(us) 00:45:34.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:34.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:45:34.680 Nvme0n1 : 1.00 6378.00 24.91 0.00 0.00 0.00 0.00 0.00 00:45:34.680 =================================================================================================================== 00:45:34.680 Total : 6378.00 24.91 0.00 0.00 0.00 0.00 0.00 00:45:34.680 00:45:35.617 20:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 81e7b77b-dbe0-4141-a652-b000ff0a22ea 00:45:35.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:45:35.877 Nvme0n1 : 2.00 6649.00 25.97 0.00 0.00 0.00 0.00 0.00 00:45:35.877 =================================================================================================================== 00:45:35.877 Total : 6649.00 25.97 0.00 0.00 0.00 0.00 0.00 00:45:35.877 00:45:36.136 true 00:45:36.136 20:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81e7b77b-dbe0-4141-a652-b000ff0a22ea 00:45:36.136 20:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:45:36.395 20:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:45:36.395 20:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:45:36.395 20:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 102424 00:45:36.655 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:45:36.655 Nvme0n1 : 3.00 6688.67 26.13 0.00 0.00 0.00 0.00 0.00 00:45:36.655 =================================================================================================================== 00:45:36.655 Total : 6688.67 26.13 0.00 0.00 0.00 0.00 0.00 00:45:36.655 00:45:38.036 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:45:38.036 Nvme0n1 : 4.00 6688.75 26.13 0.00 0.00 0.00 0.00 0.00 00:45:38.036 =================================================================================================================== 00:45:38.036 Total : 6688.75 26.13 0.00 0.00 0.00 0.00 0.00 00:45:38.036 00:45:38.974 Job: 
Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:45:38.974 Nvme0n1 : 5.00 6707.20 26.20 0.00 0.00 0.00 0.00 0.00 00:45:38.974 =================================================================================================================== 00:45:38.974 Total : 6707.20 26.20 0.00 0.00 0.00 0.00 0.00 00:45:38.974 00:45:39.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:45:39.912 Nvme0n1 : 6.00 6711.83 26.22 0.00 0.00 0.00 0.00 0.00 00:45:39.912 =================================================================================================================== 00:45:39.912 Total : 6711.83 26.22 0.00 0.00 0.00 0.00 0.00 00:45:39.912 00:45:40.850 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:45:40.850 Nvme0n1 : 7.00 6567.43 25.65 0.00 0.00 0.00 0.00 0.00 00:45:40.850 =================================================================================================================== 00:45:40.850 Total : 6567.43 25.65 0.00 0.00 0.00 0.00 0.00 00:45:40.850 00:45:41.788 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:45:41.788 Nvme0n1 : 8.00 6562.38 25.63 0.00 0.00 0.00 0.00 0.00 00:45:41.788 =================================================================================================================== 00:45:41.788 Total : 6562.38 25.63 0.00 0.00 0.00 0.00 0.00 00:45:41.788 00:45:42.726 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:45:42.726 Nvme0n1 : 9.00 6567.11 25.65 0.00 0.00 0.00 0.00 0.00 00:45:42.726 =================================================================================================================== 00:45:42.726 Total : 6567.11 25.65 0.00 0.00 0.00 0.00 0.00 00:45:42.726 00:45:43.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:45:43.664 Nvme0n1 : 10.00 6584.80 25.72 0.00 0.00 0.00 0.00 0.00 00:45:43.664 =================================================================================================================== 00:45:43.664 Total : 6584.80 25.72 0.00 0.00 0.00 0.00 0.00 00:45:43.664 00:45:43.927 00:45:43.927 Latency(us) 00:45:43.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:43.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:45:43.927 Nvme0n1 : 10.02 6585.06 25.72 0.00 0.00 19431.81 6404.65 138221.38 00:45:43.927 =================================================================================================================== 00:45:43.927 Total : 6585.06 25.72 0.00 0.00 19431.81 6404.65 138221.38 00:45:43.927 { 00:45:43.927 "results": [ 00:45:43.927 { 00:45:43.927 "job": "Nvme0n1", 00:45:43.927 "core_mask": "0x2", 00:45:43.927 "workload": "randwrite", 00:45:43.927 "status": "finished", 00:45:43.927 "queue_depth": 128, 00:45:43.927 "io_size": 4096, 00:45:43.927 "runtime": 10.019047, 00:45:43.927 "iops": 6585.057441091953, 00:45:43.927 "mibps": 25.72288062926544, 00:45:43.927 "io_failed": 0, 00:45:43.927 "io_timeout": 0, 00:45:43.927 "avg_latency_us": 19431.806296945444, 00:45:43.927 "min_latency_us": 6404.654545454546, 00:45:43.927 "max_latency_us": 138221.3818181818 00:45:43.927 } 00:45:43.927 ], 00:45:43.927 "core_count": 1 00:45:43.927 } 00:45:43.927 20:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 102376 00:45:43.927 20:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@950 -- # '[' -z 102376 ']' 00:45:43.927 20:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 102376 00:45:43.927 20:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:45:43.927 20:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:43.928 20:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102376 00:45:43.928 20:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:45:43.928 20:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:45:43.928 20:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102376' 00:45:43.928 killing process with pid 102376 00:45:43.928 Received shutdown signal, test time was about 10.000000 seconds 00:45:43.928 00:45:43.928 Latency(us) 00:45:43.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:43.928 =================================================================================================================== 00:45:43.928 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:43.928 20:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 102376 00:45:43.928 20:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 102376 00:45:43.928 20:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:45:44.498 20:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:44.498 20:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:45:44.498 20:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81e7b77b-dbe0-4141-a652-b000ff0a22ea 00:45:44.757 20:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:45:44.757 20:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:45:44.757 20:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 101793 00:45:44.757 20:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 101793 00:45:44.757 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 101793 Killed "${NVMF_APP[@]}" "$@" 00:45:44.757 20:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:45:44.757 20:44:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:45:44.757 20:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:45:44.757 20:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:44.757 20:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:45:44.757 20:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=102583 00:45:44.757 20:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:45:44.757 20:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 102583 00:45:44.757 20:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 102583 ']' 00:45:44.757 20:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:44.757 20:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:44.758 20:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:44.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:44.758 20:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:44.758 20:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:45:45.017 [2024-10-01 20:44:40.050749] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:45:45.018 [2024-10-01 20:44:40.051957] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:45:45.018 [2024-10-01 20:44:40.052032] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:45.018 [2024-10-01 20:44:40.190498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:45.018 [2024-10-01 20:44:40.259674] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:45.018 [2024-10-01 20:44:40.259736] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:45.018 [2024-10-01 20:44:40.259756] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:45.018 [2024-10-01 20:44:40.259766] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:45.018 [2024-10-01 20:44:40.259775] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
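This is where the dirty variant earns its name: the original target (pid 101793) was killed with SIGKILL at nvmf_lvs_grow.sh@74 above, so the lvstore on the aio file was never cleanly closed. A fresh target is started (in interrupt mode, matching this job's nvmf_target_core_interrupt_mode suite), and re-creating the aio bdev below forces the blobstore to replay its metadata, which is what the bs_recover / "Recover: blob" notices that follow report before the free/total cluster counts are re-checked. The sequence, condensed from this trace ($nvmfpid stands in for the old target's pid, 101793 in this run):

    kill -9 "$nvmfpid"    # no clean shutdown: lvstore left dirty on disk
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create \
        /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    # expected notices: bs_recover "Performing recovery on blobstore",
    # then the lvol reappears and the cluster counts are asserted again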
00:45:45.018 [2024-10-01 20:44:40.259814] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:45:45.277 [2024-10-01 20:44:40.357673] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:45:45.277 [2024-10-01 20:44:40.357965] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:45:45.844 20:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:45.844 20:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:45:45.844 20:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:45:45.844 20:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:45.844 20:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:45:45.845 20:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:46.103 20:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:45:46.103 [2024-10-01 20:44:41.305705] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:45:46.103 [2024-10-01 20:44:41.306224] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:45:46.103 [2024-10-01 20:44:41.306523] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:45:46.103 20:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:45:46.103 20:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4ab5e733-b827-4703-9d6f-62fe3b08a61a 00:45:46.103 20:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=4ab5e733-b827-4703-9d6f-62fe3b08a61a 00:45:46.103 20:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:45:46.103 20:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:45:46.103 20:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:45:46.103 20:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:45:46.103 20:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:45:46.671 20:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ab5e733-b827-4703-9d6f-62fe3b08a61a -t 2000 00:45:46.671 [ 00:45:46.671 { 00:45:46.671 "aliases": [ 00:45:46.671 "lvs/lvol" 00:45:46.671 ], 00:45:46.671 
"assigned_rate_limits": { 00:45:46.671 "r_mbytes_per_sec": 0, 00:45:46.671 "rw_ios_per_sec": 0, 00:45:46.671 "rw_mbytes_per_sec": 0, 00:45:46.671 "w_mbytes_per_sec": 0 00:45:46.671 }, 00:45:46.671 "block_size": 4096, 00:45:46.671 "claimed": false, 00:45:46.671 "driver_specific": { 00:45:46.671 "lvol": { 00:45:46.671 "base_bdev": "aio_bdev", 00:45:46.671 "clone": false, 00:45:46.671 "esnap_clone": false, 00:45:46.671 "lvol_store_uuid": "81e7b77b-dbe0-4141-a652-b000ff0a22ea", 00:45:46.671 "num_allocated_clusters": 38, 00:45:46.671 "snapshot": false, 00:45:46.671 "thin_provision": false 00:45:46.671 } 00:45:46.671 }, 00:45:46.671 "name": "4ab5e733-b827-4703-9d6f-62fe3b08a61a", 00:45:46.671 "num_blocks": 38912, 00:45:46.671 "product_name": "Logical Volume", 00:45:46.671 "supported_io_types": { 00:45:46.671 "abort": false, 00:45:46.671 "compare": false, 00:45:46.671 "compare_and_write": false, 00:45:46.671 "copy": false, 00:45:46.671 "flush": false, 00:45:46.671 "get_zone_info": false, 00:45:46.671 "nvme_admin": false, 00:45:46.671 "nvme_io": false, 00:45:46.671 "nvme_io_md": false, 00:45:46.671 "nvme_iov_md": false, 00:45:46.671 "read": true, 00:45:46.671 "reset": true, 00:45:46.671 "seek_data": true, 00:45:46.671 "seek_hole": true, 00:45:46.671 "unmap": true, 00:45:46.671 "write": true, 00:45:46.671 "write_zeroes": true, 00:45:46.671 "zcopy": false, 00:45:46.671 "zone_append": false, 00:45:46.671 "zone_management": false 00:45:46.671 }, 00:45:46.671 "uuid": "4ab5e733-b827-4703-9d6f-62fe3b08a61a", 00:45:46.671 "zoned": false 00:45:46.671 } 00:45:46.671 ] 00:45:46.671 20:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:45:46.671 20:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81e7b77b-dbe0-4141-a652-b000ff0a22ea 00:45:46.672 20:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:45:46.931 20:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:45:46.931 20:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81e7b77b-dbe0-4141-a652-b000ff0a22ea 00:45:46.931 20:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:45:47.190 20:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:45:47.190 20:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:45:47.457 [2024-10-01 20:44:42.592545] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:45:47.457 20:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81e7b77b-dbe0-4141-a652-b000ff0a22ea 00:45:47.457 20:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:45:47.457 20:44:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81e7b77b-dbe0-4141-a652-b000ff0a22ea 00:45:47.457 20:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:47.457 20:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:47.457 20:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:47.457 20:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:47.457 20:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:47.457 20:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:47.457 20:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:47.457 20:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:45:47.457 20:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81e7b77b-dbe0-4141-a652-b000ff0a22ea 00:45:47.717 2024/10/01 20:44:42 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:81e7b77b-dbe0-4141-a652-b000ff0a22ea], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:45:47.717 request: 00:45:47.717 { 00:45:47.717 "method": "bdev_lvol_get_lvstores", 00:45:47.717 "params": { 00:45:47.717 "uuid": "81e7b77b-dbe0-4141-a652-b000ff0a22ea" 00:45:47.717 } 00:45:47.717 } 00:45:47.717 Got JSON-RPC error response 00:45:47.717 GoRPCClient: error on JSON-RPC call 00:45:47.717 20:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:45:47.717 20:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:47.717 20:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:47.717 20:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:47.717 20:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:45:47.976 aio_bdev 00:45:47.976 20:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4ab5e733-b827-4703-9d6f-62fe3b08a61a 00:45:47.976 20:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local 
bdev_name=4ab5e733-b827-4703-9d6f-62fe3b08a61a 00:45:47.976 20:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:45:47.976 20:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:45:47.976 20:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:45:47.976 20:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:45:47.976 20:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:45:48.236 20:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ab5e733-b827-4703-9d6f-62fe3b08a61a -t 2000 00:45:48.495 [ 00:45:48.495 { 00:45:48.495 "aliases": [ 00:45:48.495 "lvs/lvol" 00:45:48.495 ], 00:45:48.495 "assigned_rate_limits": { 00:45:48.495 "r_mbytes_per_sec": 0, 00:45:48.495 "rw_ios_per_sec": 0, 00:45:48.495 "rw_mbytes_per_sec": 0, 00:45:48.495 "w_mbytes_per_sec": 0 00:45:48.495 }, 00:45:48.495 "block_size": 4096, 00:45:48.495 "claimed": false, 00:45:48.495 "driver_specific": { 00:45:48.495 "lvol": { 00:45:48.495 "base_bdev": "aio_bdev", 00:45:48.495 "clone": false, 00:45:48.495 "esnap_clone": false, 00:45:48.495 "lvol_store_uuid": "81e7b77b-dbe0-4141-a652-b000ff0a22ea", 00:45:48.495 "num_allocated_clusters": 38, 00:45:48.495 "snapshot": false, 00:45:48.495 "thin_provision": false 00:45:48.495 } 00:45:48.495 }, 00:45:48.495 "name": "4ab5e733-b827-4703-9d6f-62fe3b08a61a", 00:45:48.495 "num_blocks": 38912, 00:45:48.495 "product_name": "Logical Volume", 00:45:48.495 "supported_io_types": { 00:45:48.495 "abort": false, 00:45:48.495 "compare": false, 00:45:48.495 "compare_and_write": false, 00:45:48.495 "copy": false, 00:45:48.495 "flush": false, 00:45:48.495 "get_zone_info": false, 00:45:48.495 "nvme_admin": false, 00:45:48.495 "nvme_io": false, 00:45:48.495 "nvme_io_md": false, 00:45:48.495 "nvme_iov_md": false, 00:45:48.495 "read": true, 00:45:48.495 "reset": true, 00:45:48.495 "seek_data": true, 00:45:48.495 "seek_hole": true, 00:45:48.495 "unmap": true, 00:45:48.495 "write": true, 00:45:48.495 "write_zeroes": true, 00:45:48.495 "zcopy": false, 00:45:48.495 "zone_append": false, 00:45:48.495 "zone_management": false 00:45:48.495 }, 00:45:48.495 "uuid": "4ab5e733-b827-4703-9d6f-62fe3b08a61a", 00:45:48.495 "zoned": false 00:45:48.495 } 00:45:48.495 ] 00:45:48.495 20:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:45:48.495 20:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81e7b77b-dbe0-4141-a652-b000ff0a22ea 00:45:48.495 20:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:45:48.754 20:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:45:48.754 20:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 81e7b77b-dbe0-4141-a652-b000ff0a22ea 00:45:48.754 20:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:45:49.014 20:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:45:49.014 20:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4ab5e733-b827-4703-9d6f-62fe3b08a61a 00:45:49.273 20:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 81e7b77b-dbe0-4141-a652-b000ff0a22ea 00:45:49.532 20:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:45:49.792 20:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:45:50.051 00:45:50.051 real 0m20.521s 00:45:50.051 user 0m26.523s 00:45:50.051 sys 0m9.443s 00:45:50.051 20:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:50.051 20:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:45:50.051 ************************************ 00:45:50.051 END TEST lvs_grow_dirty 00:45:50.051 ************************************ 00:45:50.051 20:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:45:50.051 20:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:45:50.051 20:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:45:50.051 20:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:45:50.051 20:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:45:50.051 20:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:45:50.051 20:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:45:50.051 20:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:45:50.051 20:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:45:50.051 nvmf_trace.0 00:45:50.309 20:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:45:50.309 20:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:45:50.309 20:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:45:50.309 20:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@121 -- # sync 00:45:50.876 20:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:50.876 20:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:45:50.876 20:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:50.876 20:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:50.876 rmmod nvme_tcp 00:45:50.876 rmmod nvme_fabrics 00:45:50.876 rmmod nvme_keyring 00:45:50.876 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:50.876 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:45:50.876 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:45:50.876 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 102583 ']' 00:45:50.876 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 102583 00:45:50.876 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 102583 ']' 00:45:50.876 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 102583 00:45:50.876 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:45:50.876 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:50.876 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102583 00:45:50.876 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:50.876 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:50.876 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102583' 00:45:50.876 killing process with pid 102583 00:45:50.876 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 102583 00:45:50.876 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 102583 00:45:51.138 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:45:51.138 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:45:51.138 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:45:51.138 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:45:51.138 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:45:51.138 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:45:51.138 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:45:51.138 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ 
nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:51.138 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:45:51.138 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:45:51.138 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:45:51.138 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:45:51.138 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:45:51.138 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:45:51.138 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:45:51.461 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:45:51.461 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:45:51.461 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:45:51.461 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:45:51.461 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:45:51.461 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:51.461 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:51.461 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 00:45:51.461 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:51.461 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:51.461 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:51.461 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:45:51.461 ************************************ 00:45:51.461 END TEST nvmf_lvs_grow 00:45:51.461 ************************************ 00:45:51.461 00:45:51.461 real 0m41.836s 00:45:51.461 user 0m45.477s 00:45:51.461 sys 0m12.782s 00:45:51.461 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:51.461 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:45:51.461 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:45:51.461 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:45:51.461 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:45:51.461 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:45:51.461 ************************************ 00:45:51.461 START TEST nvmf_bdev_io_wait 00:45:51.461 ************************************ 00:45:51.461 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:45:51.721 * Looking for test storage... 00:45:51.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:45:51.721 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:51.721 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:45:51.721 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:51.721 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:51.721 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:51.721 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:51.721 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:51.721 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:45:51.721 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:45:51.721 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:45:51.721 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:45:51.721 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:51.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:51.722 --rc genhtml_branch_coverage=1 00:45:51.722 --rc genhtml_function_coverage=1 00:45:51.722 --rc genhtml_legend=1 00:45:51.722 --rc geninfo_all_blocks=1 00:45:51.722 --rc geninfo_unexecuted_blocks=1 00:45:51.722 00:45:51.722 ' 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:51.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:51.722 --rc genhtml_branch_coverage=1 00:45:51.722 --rc genhtml_function_coverage=1 00:45:51.722 --rc genhtml_legend=1 00:45:51.722 --rc geninfo_all_blocks=1 00:45:51.722 --rc geninfo_unexecuted_blocks=1 00:45:51.722 00:45:51.722 ' 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:51.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:51.722 --rc genhtml_branch_coverage=1 00:45:51.722 --rc genhtml_function_coverage=1 00:45:51.722 --rc genhtml_legend=1 00:45:51.722 --rc geninfo_all_blocks=1 00:45:51.722 --rc geninfo_unexecuted_blocks=1 00:45:51.722 00:45:51.722 ' 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:51.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:51.722 --rc genhtml_branch_coverage=1 00:45:51.722 --rc genhtml_function_coverage=1 00:45:51.722 --rc genhtml_legend=1 00:45:51.722 --rc geninfo_all_blocks=1 00:45:51.722 --rc 
geninfo_unexecuted_blocks=1 00:45:51.722 00:45:51.722 ' 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:51.722 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # nvmf_veth_init 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:45:51.723 Cannot find device "nvmf_init_br" 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:45:51.723 Cannot find device "nvmf_init_br2" 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:45:51.723 Cannot find device "nvmf_tgt_br" 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:45:51.723 Cannot find device "nvmf_tgt_br2" 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:45:51.723 Cannot find device "nvmf_init_br" 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:45:51.723 Cannot find device "nvmf_init_br2" 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- 
# ip link set nvmf_tgt_br down 00:45:51.723 Cannot find device "nvmf_tgt_br" 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:45:51.723 Cannot find device "nvmf_tgt_br2" 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:45:51.723 Cannot find device "nvmf_br" 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:45:51.723 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:45:51.981 Cannot find device "nvmf_init_if" 00:45:51.981 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:45:51.981 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:45:51.981 Cannot find device "nvmf_init_if2" 00:45:51.982 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:45:51.982 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:51.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:51.982 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:45:51.982 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:51.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:51.982 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:45:51.982 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:45:51.982 20:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:45:51.982 20:44:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:51.982 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:52.240 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:52.240 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:45:52.240 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:45:52.240 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:45:52.240 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:45:52.240 
20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:45:52.240 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:45:52.240 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:45:52.240 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:45:52.240 00:45:52.240 --- 10.0.0.3 ping statistics --- 00:45:52.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:52.240 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:45:52.240 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:45:52.240 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:45:52.240 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:45:52.240 00:45:52.240 --- 10.0.0.4 ping statistics --- 00:45:52.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:52.240 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:45:52.240 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:52.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:52.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:45:52.240 00:45:52.240 --- 10.0.0.1 ping statistics --- 00:45:52.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:52.240 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:45:52.240 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:45:52.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:45:52.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:45:52.241 00:45:52.241 --- 10.0.0.2 ping statistics --- 00:45:52.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:52.241 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:45:52.241 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:52.241 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # return 0 00:45:52.241 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:45:52.241 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:52.241 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:45:52.241 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:45:52.241 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:52.241 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:45:52.241 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:45:52.241 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:45:52.241 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:45:52.241 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:52.241 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:45:52.241 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=103055 00:45:52.241 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 103055 00:45:52.241 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:45:52.241 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 103055 ']' 00:45:52.241 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:52.241 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:52.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:52.241 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
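A note on the nvmf_tgt invocation above: --wait-for-rpc holds the target in a pre-initialization state, with reactors running but the framework not yet started, so configuration RPCs can land before any subsystem comes up. That is what lets the bdev_set_options call a little further down take effect (a deliberately small bdev_io pool and cache, judging by this test's name) before framework_start_init brings the bdev layer online. A minimal sketch of the same pattern outside the harness, assuming the default /var/tmp/spdk.sock RPC socket and paths relative to the SPDK repo root:

  build/bin/nvmf_tgt --wait-for-rpc &
  scripts/rpc.py bdev_set_options -p 5 -c 1   # pool/cache tuning, must land pre-init
  scripts/rpc.py framework_start_init         # framework starts with the tuned options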
00:45:52.241 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:52.241 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:45:52.241 [2024-10-01 20:44:47.359226] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:45:52.241 [2024-10-01 20:44:47.360495] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:45:52.241 [2024-10-01 20:44:47.360563] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:52.499 [2024-10-01 20:44:47.503261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:52.499 [2024-10-01 20:44:47.576590] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:52.499 [2024-10-01 20:44:47.576658] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:52.499 [2024-10-01 20:44:47.576673] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:52.499 [2024-10-01 20:44:47.576683] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:52.499 [2024-10-01 20:44:47.576692] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:52.499 [2024-10-01 20:44:47.576772] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:45:52.499 [2024-10-01 20:44:47.576879] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:45:52.499 [2024-10-01 20:44:47.577578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:45:52.499 [2024-10-01 20:44:47.577631] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:45:52.499 [2024-10-01 20:44:47.578153] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
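The four "Reactor started on core N" notices line up with the -m 0xF mask passed to nvmf_tgt: 0xF is binary 1111, i.e. cores 0 through 3. For reference, a one-liner (not part of the harness) that expands such a mask into a core list:

  python3 -c 'm = 0xF; print([i for i in range(m.bit_length()) if m >> i & 1])'   # prints [0, 1, 2, 3]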
00:45:52.499 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:52.499 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:45:52.499 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:45:52.499 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:52.499 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:45:52.499 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:52.499 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:45:52.499 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:52.499 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:45:52.499 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:52.499 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:45:52.500 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:52.500 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:45:52.500 [2024-10-01 20:44:47.732471] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:45:52.500 [2024-10-01 20:44:47.732656] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:45:52.500 [2024-10-01 20:44:47.733433] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:45:52.500 [2024-10-01 20:44:47.734011] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
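With the framework initialized and each poll group switched to interrupt mode, the target is now assembled through the harness's rpc_cmd wrapper (which forwards to rpc.py). Expressed directly against rpc.py, the bring-up that follows — transport, backing malloc bdev, subsystem, namespace, listener — is roughly this sketch, with every value taken from the commands visible below and paths relative to the SPDK repo root:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, 8 KiB in-capsule data
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420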
00:45:52.500 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:52.500 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:52.500 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:52.500 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:45:52.500 [2024-10-01 20:44:47.746553] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:45:52.759 Malloc0 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:45:52.759 [2024-10-01 20:44:47.826637] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=103089 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=103091 00:45:52.759 20:44:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:52.759 { 00:45:52.759 "params": { 00:45:52.759 "name": "Nvme$subsystem", 00:45:52.759 "trtype": "$TEST_TRANSPORT", 00:45:52.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:52.759 "adrfam": "ipv4", 00:45:52.759 "trsvcid": "$NVMF_PORT", 00:45:52.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:52.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:52.759 "hdgst": ${hdgst:-false}, 00:45:52.759 "ddgst": ${ddgst:-false} 00:45:52.759 }, 00:45:52.759 "method": "bdev_nvme_attach_controller" 00:45:52.759 } 00:45:52.759 EOF 00:45:52.759 )") 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=103093 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:52.759 { 00:45:52.759 "params": { 00:45:52.759 "name": "Nvme$subsystem", 00:45:52.759 "trtype": "$TEST_TRANSPORT", 00:45:52.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:52.759 "adrfam": "ipv4", 00:45:52.759 "trsvcid": "$NVMF_PORT", 00:45:52.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:52.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:52.759 "hdgst": ${hdgst:-false}, 00:45:52.759 "ddgst": ${ddgst:-false} 00:45:52.759 }, 00:45:52.759 "method": "bdev_nvme_attach_controller" 00:45:52.759 } 00:45:52.759 EOF 00:45:52.759 )") 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=103096 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:45:52.759 20:44:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:52.759 { 00:45:52.759 "params": { 00:45:52.759 "name": "Nvme$subsystem", 00:45:52.759 "trtype": "$TEST_TRANSPORT", 00:45:52.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:52.759 "adrfam": "ipv4", 00:45:52.759 "trsvcid": "$NVMF_PORT", 00:45:52.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:52.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:52.759 "hdgst": ${hdgst:-false}, 00:45:52.759 "ddgst": ${ddgst:-false} 00:45:52.759 }, 00:45:52.759 "method": "bdev_nvme_attach_controller" 00:45:52.759 } 00:45:52.759 EOF 00:45:52.759 )") 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:52.759 { 00:45:52.759 "params": { 00:45:52.759 "name": "Nvme$subsystem", 00:45:52.759 "trtype": "$TEST_TRANSPORT", 00:45:52.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:52.759 "adrfam": "ipv4", 00:45:52.759 "trsvcid": "$NVMF_PORT", 00:45:52.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:52.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:52.759 "hdgst": ${hdgst:-false}, 00:45:52.759 "ddgst": ${ddgst:-false} 00:45:52.759 }, 00:45:52.759 "method": "bdev_nvme_attach_controller" 00:45:52.759 } 00:45:52.759 EOF 00:45:52.759 )") 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
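The interleaved `gen_nvmf_target_json` calls above all follow the same pattern: a `config` array collects one heredoc fragment per subsystem (with $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT substituted), the fragments are joined with `IFS=,`, and the result is run through `jq .` to validate and pretty-print. A condensed, runnable sketch of that pattern (not the verbatim function; one fragment, literal values):

    # Sketch of the config-assembly pattern visible in the trace
    config=()
    config+=('{ "params": { "name": "Nvme1", "trtype": "tcp" }, "method": "bdev_nvme_attach_controller" }')
    (IFS=,; printf '%s\n' "${config[*]}") | jq .   # join fragments, validate as JSON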
00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:45:52.759 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:45:52.760 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:45:52.760 "params": { 00:45:52.760 "name": "Nvme1", 00:45:52.760 "trtype": "tcp", 00:45:52.760 "traddr": "10.0.0.3", 00:45:52.760 "adrfam": "ipv4", 00:45:52.760 "trsvcid": "4420", 00:45:52.760 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:52.760 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:52.760 "hdgst": false, 00:45:52.760 "ddgst": false 00:45:52.760 }, 00:45:52.760 "method": "bdev_nvme_attach_controller" 00:45:52.760 }' 00:45:52.760 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:45:52.760 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:45:52.760 "params": { 00:45:52.760 "name": "Nvme1", 00:45:52.760 "trtype": "tcp", 00:45:52.760 "traddr": "10.0.0.3", 00:45:52.760 "adrfam": "ipv4", 00:45:52.760 "trsvcid": "4420", 00:45:52.760 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:52.760 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:52.760 "hdgst": false, 00:45:52.760 "ddgst": false 00:45:52.760 }, 00:45:52.760 "method": "bdev_nvme_attach_controller" 00:45:52.760 }' 00:45:52.760 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:45:52.760 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:45:52.760 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:45:52.760 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:45:52.760 "params": { 00:45:52.760 "name": "Nvme1", 00:45:52.760 "trtype": "tcp", 00:45:52.760 "traddr": "10.0.0.3", 00:45:52.760 "adrfam": "ipv4", 00:45:52.760 "trsvcid": "4420", 00:45:52.760 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:52.760 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:52.760 "hdgst": false, 00:45:52.760 "ddgst": false 00:45:52.760 }, 00:45:52.760 "method": "bdev_nvme_attach_controller" 00:45:52.760 }' 00:45:52.760 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:45:52.760 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:45:52.760 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:45:52.760 "params": { 00:45:52.760 "name": "Nvme1", 00:45:52.760 "trtype": "tcp", 00:45:52.760 "traddr": "10.0.0.3", 00:45:52.760 "adrfam": "ipv4", 00:45:52.760 "trsvcid": "4420", 00:45:52.760 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:52.760 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:52.760 "hdgst": false, 00:45:52.760 "ddgst": false 00:45:52.760 }, 00:45:52.760 "method": "bdev_nvme_attach_controller" 00:45:52.760 }' 00:45:52.760 [2024-10-01 20:44:47.895718] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
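The four identical `printf` dumps above are the resolved attach-controller fragments handed to each bdevperf instance over `/dev/fd/63`. Assembled, the config each instance reads looks roughly like this (the outer "subsystems"/"bdev" wrapper is assumed from nvmf/common.sh; only the inner fragment is echoed in the trace, and its values are copied verbatim):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }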
00:45:52.760 [2024-10-01 20:44:47.895807] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:45:52.760 [2024-10-01 20:44:47.896179] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:45:52.760 [2024-10-01 20:44:47.896256] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:45:52.760 [2024-10-01 20:44:47.901622] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:45:52.760 [2024-10-01 20:44:47.901850] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:45:52.760 20:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 103089 00:45:52.760 [2024-10-01 20:44:47.942372] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:45:52.760 [2024-10-01 20:44:47.942487] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:45:53.019 [2024-10-01 20:44:48.075339] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:53.019 [2024-10-01 20:44:48.121579] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:53.019 [2024-10-01 20:44:48.131495] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:45:53.019 [2024-10-01 20:44:48.176672] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:45:53.019 [2024-10-01 20:44:48.191270] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:53.019 [2024-10-01 20:44:48.210863] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:53.019 [2024-10-01 20:44:48.264970] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:45:53.019 [2024-10-01 20:44:48.267897] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:45:53.277 Running I/O for 1 seconds... 00:45:53.277 Running I/O for 1 seconds... 00:45:53.277 Running I/O for 1 seconds... 00:45:53.535 Running I/O for 1 seconds... 
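Reading the four result tables that follow (one per bdevperf job: write, read, flush, unmap): MiB/s is simply IOPS scaled by the fixed 4 KiB IO size, MiB/s = IOPS × 4096 / 2^20. For the write job, 9707.24 × 4096 / 2^20 ≈ 37.92 MiB/s, matching its table row; for flush, 176619.18 × 4096 / 2^20 ≈ 689.92 MiB/s likewise. The flush job's far higher rate is expected here, since flushing a RAM-backed Malloc bdev involves no media work.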
00:45:54.101 9645.00 IOPS, 37.68 MiB/s 00:45:54.101 Latency(us) 00:45:54.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:54.101 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:45:54.101 Nvme1n1 : 1.01 9707.24 37.92 0.00 0.00 13131.76 7119.59 20971.52 00:45:54.101 =================================================================================================================== 00:45:54.101 Total : 9707.24 37.92 0.00 0.00 13131.76 7119.59 20971.52 00:45:54.360 7480.00 IOPS, 29.22 MiB/s 00:45:54.360 Latency(us) 00:45:54.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:54.360 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:45:54.360 Nvme1n1 : 1.01 7527.18 29.40 0.00 0.00 16904.22 4438.57 19660.80 00:45:54.360 =================================================================================================================== 00:45:54.360 Total : 7527.18 29.40 0.00 0.00 16904.22 4438.57 19660.80 00:45:54.360 176960.00 IOPS, 691.25 MiB/s 00:45:54.360 Latency(us) 00:45:54.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:54.360 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:45:54.360 Nvme1n1 : 1.00 176619.18 689.92 0.00 0.00 720.55 350.02 2278.87 00:45:54.360 =================================================================================================================== 00:45:54.360 Total : 176619.18 689.92 0.00 0.00 720.55 350.02 2278.87 00:45:54.360 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 103091 00:45:54.360 8729.00 IOPS, 34.10 MiB/s 00:45:54.360 Latency(us) 00:45:54.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:54.360 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:45:54.360 Nvme1n1 : 1.01 8810.30 34.42 0.00 0.00 14469.80 5928.03 20852.36 00:45:54.360 =================================================================================================================== 00:45:54.360 Total : 8810.30 34.42 0.00 0.00 14469.80 5928.03 20852.36 00:45:54.618 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 103093 00:45:54.618 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 103096 00:45:54.618 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:54.618 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:54.618 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:45:54.618 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:54.618 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:45:54.618 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:45:54.618 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:45:54.618 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:45:54.618 20:44:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:54.618 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:45:54.618 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:54.618 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:54.618 rmmod nvme_tcp 00:45:54.618 rmmod nvme_fabrics 00:45:54.877 rmmod nvme_keyring 00:45:54.877 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:54.877 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:45:54.877 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:45:54.877 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 103055 ']' 00:45:54.877 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 103055 00:45:54.877 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 103055 ']' 00:45:54.877 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 103055 00:45:54.877 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:45:54.877 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:54.877 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 103055 00:45:54.877 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:54.877 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:54.877 killing process with pid 103055 00:45:54.877 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 103055' 00:45:54.877 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 103055 00:45:54.878 20:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 103055 00:45:54.878 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:45:54.878 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:45:54.878 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:45:54.878 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:45:54.878 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:45:54.878 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:45:54.878 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:45:54.878 20:44:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:54.878 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:45:54.878 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:45:54.878 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:45:54.878 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:45:54.878 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:45:55.137 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:45:55.137 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:45:55.137 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:45:55.137 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:45:55.137 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:45:55.137 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:45:55.137 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:45:55.137 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:55.137 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:55.137 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:45:55.137 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:55.137 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:55.137 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:55.137 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:45:55.137 00:45:55.137 real 0m3.681s 00:45:55.137 user 0m13.056s 00:45:55.137 sys 0m2.764s 00:45:55.137 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:55.137 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:45:55.137 ************************************ 00:45:55.137 END TEST nvmf_bdev_io_wait 00:45:55.137 ************************************ 00:45:55.137 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:45:55.137 20:44:50 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:45:55.137 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:55.137 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:45:55.137 ************************************ 00:45:55.137 START TEST nvmf_queue_depth 00:45:55.137 ************************************ 00:45:55.137 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:45:55.396 * Looking for test storage... 00:45:55.396 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:45:55.396 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:55.396 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:45:55.396 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:55.396 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:55.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:55.397 --rc genhtml_branch_coverage=1 00:45:55.397 --rc genhtml_function_coverage=1 00:45:55.397 --rc genhtml_legend=1 00:45:55.397 --rc geninfo_all_blocks=1 00:45:55.397 --rc geninfo_unexecuted_blocks=1 00:45:55.397 00:45:55.397 ' 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:55.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:55.397 --rc genhtml_branch_coverage=1 00:45:55.397 --rc genhtml_function_coverage=1 00:45:55.397 --rc genhtml_legend=1 00:45:55.397 --rc geninfo_all_blocks=1 00:45:55.397 --rc geninfo_unexecuted_blocks=1 00:45:55.397 00:45:55.397 ' 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:55.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:55.397 --rc genhtml_branch_coverage=1 00:45:55.397 --rc genhtml_function_coverage=1 00:45:55.397 --rc genhtml_legend=1 00:45:55.397 --rc geninfo_all_blocks=1 00:45:55.397 --rc geninfo_unexecuted_blocks=1 00:45:55.397 00:45:55.397 ' 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:55.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:55.397 --rc genhtml_branch_coverage=1 00:45:55.397 --rc genhtml_function_coverage=1 00:45:55.397 --rc genhtml_legend=1 00:45:55.397 --rc geninfo_all_blocks=1 00:45:55.397 --rc 
geninfo_unexecuted_blocks=1 00:45:55.397 00:45:55.397 ' 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:55.397 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@458 -- # nvmf_veth_init 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.4 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:45:55.398 Cannot find device "nvmf_init_br" 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:45:55.398 Cannot find device "nvmf_init_br2" 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:45:55.398 Cannot find device "nvmf_tgt_br" 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:45:55.398 Cannot find device "nvmf_tgt_br2" 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:45:55.398 Cannot find device "nvmf_init_br" 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:45:55.398 Cannot find device "nvmf_init_br2" 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:45:55.398 
20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:45:55.398 Cannot find device "nvmf_tgt_br" 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:45:55.398 Cannot find device "nvmf_tgt_br2" 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:45:55.398 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:45:55.658 Cannot find device "nvmf_br" 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:45:55.658 Cannot find device "nvmf_init_if" 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:45:55.658 Cannot find device "nvmf_init_if2" 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:55.658 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:55.658 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- 
# ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT 00:45:55.658 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:45:55.659 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:45:55.659 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:45:55.659 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:45:55.659 00:45:55.659 --- 10.0.0.3 ping statistics --- 00:45:55.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:55.659 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:45:55.659 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:45:55.659 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:45:55.659 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:45:55.659 00:45:55.659 --- 10.0.0.4 ping statistics --- 00:45:55.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:55.659 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:45:55.659 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:55.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:55.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:45:55.918 00:45:55.918 --- 10.0.0.1 ping statistics --- 00:45:55.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:55.918 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:45:55.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:45:55.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:45:55.918 00:45:55.918 --- 10.0.0.2 ping statistics --- 00:45:55.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:55.918 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@459 -- # return 0 00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=103355 00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 103355 00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 103355 ']' 00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:55.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
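For orientation, the ping checks above conclude nvmf_veth_init: the initiator veths nvmf_init_if/nvmf_init_if2 (10.0.0.1, 10.0.0.2) stay on the host, the target veths nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3, 10.0.0.4) move into the nvmf_tgt_ns_spdk namespace, and their peer ends are enslaved to the nvmf_br bridge, with iptables ACCEPT rules for TCP port 4420. Condensed to a single interface pair, the fixture is (a sketch of commands already shown in the trace):

    # One init/target pair of the veth fixture built above (condensed)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT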
00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:55.918 20:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:45:55.918 [2024-10-01 20:44:51.004140] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:45:55.918 [2024-10-01 20:44:51.005270] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:45:55.918 [2024-10-01 20:44:51.005363] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:55.918 [2024-10-01 20:44:51.144738] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:56.178 [2024-10-01 20:44:51.198474] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:56.178 [2024-10-01 20:44:51.198535] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:56.178 [2024-10-01 20:44:51.198545] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:56.178 [2024-10-01 20:44:51.198552] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:56.178 [2024-10-01 20:44:51.198558] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:56.178 [2024-10-01 20:44:51.198589] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:45:56.178 [2024-10-01 20:44:51.286910] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:45:56.178 [2024-10-01 20:44:51.287197] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
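With that, `nvmfappstart -m 0x2` has the queue_depth target running: core mask 0x2 pins nvmf_tgt to core 1 (hence "Reactor started on core 1"), the app runs inside the namespace with --interrupt-mode (hence the intr-mode thread notices), and waitforlisten blocks until pid 103355 serves /var/tmp/spdk.sock. By hand, that start-up is roughly (a sketch; binary path and flags copied from the trace):

    # Sketch: what nvmfappstart -m 0x2 expands to in this run
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &   # -e 0xFFFF: tracepoint group mask
    nvmfpid=$!                # 103355 in this run
    waitforlisten "$nvmfpid"  # poll until the RPC socket /var/tmp/spdk.sock answers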
00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:45:56.178 [2024-10-01 20:44:51.367474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:45:56.178 Malloc0 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:45:56.178 [2024-10-01 20:44:51.415426] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=103392 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 103392 /var/tmp/bdevperf.sock 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 103392 ']' 00:45:56.178 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:45:56.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:45:56.179 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:56.179 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:45:56.179 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:56.179 20:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:45:56.439 [2024-10-01 20:44:51.481803] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
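Collected from the queue_depth.sh@23-27 trace above, the target-side configuration is five RPCs. Shown here as direct rpc.py calls, which is what the harness's rpc_cmd wrapper amounts to once its retry and xtrace plumbing is stripped away:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8192 B IO unit, -o as set in NVMF_TRANSPORT_OPTS
    $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420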
00:45:56.439 [2024-10-01 20:44:51.481905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103392 ] 00:45:56.439 [2024-10-01 20:44:51.623101] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:56.439 [2024-10-01 20:44:51.676609] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:45:57.377 20:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:57.377 20:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:45:57.377 20:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:45:57.377 20:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:57.377 20:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:45:57.377 NVMe0n1 00:45:57.377 20:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:57.377 20:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:45:57.636 Running I/O for 10 seconds... 00:46:07.675 9916.00 IOPS, 38.73 MiB/s 10240.00 IOPS, 40.00 MiB/s 10433.67 IOPS, 40.76 MiB/s 10510.50 IOPS, 41.06 MiB/s 10643.80 IOPS, 41.58 MiB/s 10693.83 IOPS, 41.77 MiB/s 10767.29 IOPS, 42.06 MiB/s 10764.88 IOPS, 42.05 MiB/s 10760.44 IOPS, 42.03 MiB/s 10694.00 IOPS, 41.77 MiB/s 00:46:07.675 Latency(us) 00:46:07.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:07.675 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:46:07.675 Verification LBA range: start 0x0 length 0x4000 00:46:07.675 NVMe0n1 : 10.06 10729.48 41.91 0.00 0.00 95056.00 16443.58 64821.06 00:46:07.675 =================================================================================================================== 00:46:07.675 Total : 10729.48 41.91 0.00 0.00 95056.00 16443.58 64821.06 00:46:07.675 { 00:46:07.675 "results": [ 00:46:07.675 { 00:46:07.675 "job": "NVMe0n1", 00:46:07.675 "core_mask": "0x1", 00:46:07.675 "workload": "verify", 00:46:07.675 "status": "finished", 00:46:07.675 "verify_range": { 00:46:07.675 "start": 0, 00:46:07.675 "length": 16384 00:46:07.675 }, 00:46:07.675 "queue_depth": 1024, 00:46:07.675 "io_size": 4096, 00:46:07.675 "runtime": 10.062369, 00:46:07.675 "iops": 10729.48129809193, 00:46:07.675 "mibps": 41.912036320671604, 00:46:07.675 "io_failed": 0, 00:46:07.675 "io_timeout": 0, 00:46:07.675 "avg_latency_us": 95055.99910111452, 00:46:07.675 "min_latency_us": 16443.578181818182, 00:46:07.675 "max_latency_us": 64821.06181818182 00:46:07.675 } 00:46:07.675 ], 00:46:07.675 "core_count": 1 00:46:07.675 } 00:46:07.675 20:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 103392 00:46:07.675 20:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 103392 ']' 
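The initiator side of this test runs entirely inside bdevperf. Condensed from the trace, the three steps are: start bdevperf idle (-z) on its own RPC socket, attach the remote subsystem as a bdev over that socket, then trigger the run. All values below are the ones the log shows:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    # Attach the target's subsystem; it shows up as bdev NVMe0n1.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Kick off the 10 s verify workload at the queue depth under test (1024).
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests

The reported numbers are self-consistent: at a sustained depth of 1024 and roughly 10729 IOPS, Little's law predicts an average latency of 1024/10729 s ≈ 95.4 ms, in line with the 95056 µs (95.06 ms) the results block reports.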
00:46:07.675 20:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 103392 00:46:07.675 20:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:46:07.675 20:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:07.675 20:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 103392 00:46:07.675 20:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:07.675 killing process with pid 103392 00:46:07.675 20:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:07.675 20:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 103392' 00:46:07.675 Received shutdown signal, test time was about 10.000000 seconds 00:46:07.675 00:46:07.675 Latency(us) 00:46:07.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:07.675 =================================================================================================================== 00:46:07.675 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:07.675 20:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 103392 00:46:07.675 20:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 103392 00:46:07.934 20:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:46:07.934 20:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:46:07.934 20:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:46:07.934 20:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:46:07.934 20:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:07.934 20:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:46:07.934 20:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:07.934 20:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:07.934 rmmod nvme_tcp 00:46:07.934 rmmod nvme_fabrics 00:46:07.934 rmmod nvme_keyring 00:46:07.934 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:07.934 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:46:07.934 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:46:07.934 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 103355 ']' 00:46:07.934 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 103355 00:46:07.934 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 103355 ']' 00:46:07.934 20:45:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 103355 00:46:07.934 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:46:07.934 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:07.934 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 103355 00:46:07.934 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:46:07.934 killing process with pid 103355 00:46:07.934 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:46:07.934 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 103355' 00:46:07.934 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 103355 00:46:07.934 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 103355 00:46:08.194 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:46:08.194 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:46:08.194 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:46:08.194 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:46:08.194 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:46:08.194 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:46:08.194 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:46:08.194 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:08.194 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:46:08.194 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:46:08.194 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:46:08.194 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:46:08.194 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:46:08.194 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:46:08.194 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:46:08.194 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:46:08.194 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:46:08.194 20:45:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:46:08.194 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:46:08.194 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:46:08.194 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:08.454 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:08.454 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:46:08.454 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:08.454 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:08.454 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:08.454 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:46:08.454 00:46:08.454 real 0m13.170s 00:46:08.454 user 0m22.290s 00:46:08.454 sys 0m2.310s 00:46:08.454 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:08.454 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:46:08.454 ************************************ 00:46:08.454 END TEST nvmf_queue_depth 00:46:08.454 ************************************ 00:46:08.454 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:46:08.454 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:46:08.454 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:08.454 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:46:08.454 ************************************ 00:46:08.454 START TEST nvmf_target_multipath 00:46:08.454 ************************************ 00:46:08.454 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:46:08.454 * Looking for test storage... 
00:46:08.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:46:08.454 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:46:08.454 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:46:08.454 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:46:08.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:08.714 --rc genhtml_branch_coverage=1 00:46:08.714 --rc genhtml_function_coverage=1 00:46:08.714 --rc genhtml_legend=1 00:46:08.714 --rc geninfo_all_blocks=1 00:46:08.714 --rc geninfo_unexecuted_blocks=1 00:46:08.714 00:46:08.714 ' 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:46:08.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:08.714 --rc genhtml_branch_coverage=1 00:46:08.714 --rc genhtml_function_coverage=1 00:46:08.714 --rc genhtml_legend=1 00:46:08.714 --rc geninfo_all_blocks=1 00:46:08.714 --rc geninfo_unexecuted_blocks=1 00:46:08.714 00:46:08.714 ' 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:46:08.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:08.714 --rc genhtml_branch_coverage=1 00:46:08.714 --rc genhtml_function_coverage=1 00:46:08.714 --rc genhtml_legend=1 00:46:08.714 --rc geninfo_all_blocks=1 00:46:08.714 --rc geninfo_unexecuted_blocks=1 00:46:08.714 00:46:08.714 ' 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:46:08.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:08.714 --rc genhtml_branch_coverage=1 00:46:08.714 --rc genhtml_function_coverage=1 00:46:08.714 --rc 
genhtml_legend=1 00:46:08.714 --rc geninfo_all_blocks=1 00:46:08.714 --rc geninfo_unexecuted_blocks=1 00:46:08.714 00:46:08.714 ' 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:08.714 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:08.715 20:45:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@458 -- # nvmf_veth_init 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:08.715 20:45:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:46:08.715 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:46:08.715 Cannot find device "nvmf_init_br" 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:46:08.716 Cannot find device "nvmf_init_br2" 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:46:08.716 Cannot find device "nvmf_tgt_br" 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:46:08.716 Cannot find device "nvmf_tgt_br2" 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set 
nvmf_init_br down 00:46:08.716 Cannot find device "nvmf_init_br" 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:46:08.716 Cannot find device "nvmf_init_br2" 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:46:08.716 Cannot find device "nvmf_tgt_br" 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:46:08.716 Cannot find device "nvmf_tgt_br2" 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:46:08.716 Cannot find device "nvmf_br" 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:46:08.716 Cannot find device "nvmf_init_if" 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:46:08.716 Cannot find device "nvmf_init_if2" 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:08.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:08.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:46:08.716 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:46:08.975 20:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # 
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:46:08.975 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:46:08.975 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:46:08.975 00:46:08.975 --- 10.0.0.3 ping statistics --- 00:46:08.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:08.975 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:46:08.975 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:46:08.975 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:46:08.975 00:46:08.975 --- 10.0.0.4 ping statistics --- 00:46:08.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:08.975 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:46:08.975 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:46:08.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:46:08.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:46:08.975 00:46:08.975 --- 10.0.0.1 ping statistics --- 00:46:08.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:08.976 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:46:08.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:46:08.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:46:08.976 00:46:08.976 --- 10.0.0.2 ping statistics --- 00:46:08.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:08.976 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@459 -- # return 0 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@507 -- # nvmfpid=103769 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@508 -- # waitforlisten 103769 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 103769 ']' 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:08.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
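The nvmf_veth_init trace above (common.sh@177-225) is what provides the two target addresses the multipath test needs. Stripped of the xtrace noise, and omitting a few link bring-ups plus the bridge FORWARD rule for brevity, it amounts to:

    ip netns add nvmf_tgt_ns_spdk
    # Two initiator-side and two target-side veth pairs, one per path.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # Tie the four *_br peers together so both paths share one L2 segment.
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" up
        ip link set "$l" master nvmf_br
    done
    # Let NVMe/TCP (port 4420) in from both initiator interfaces.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT

The four pings that follow (10.0.0.3 and .4 from the host, 10.0.0.1 and .2 from inside the namespace) are the sanity check that both paths are routable before the multipath target starts.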
00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:08.976 20:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:46:09.235 [2024-10-01 20:45:04.271408] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:46:09.236 [2024-10-01 20:45:04.273426] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:46:09.236 [2024-10-01 20:45:04.273508] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:09.236 [2024-10-01 20:45:04.415957] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:09.236 [2024-10-01 20:45:04.485943] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:09.236 [2024-10-01 20:45:04.486012] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:09.236 [2024-10-01 20:45:04.486026] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:09.236 [2024-10-01 20:45:04.486037] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:09.236 [2024-10-01 20:45:04.486046] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:09.236 [2024-10-01 20:45:04.486188] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:46:09.236 [2024-10-01 20:45:04.486464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:46:09.236 [2024-10-01 20:45:04.487249] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:46:09.236 [2024-10-01 20:45:04.487272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:46:09.495 [2024-10-01 20:45:04.590023] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:46:09.495 [2024-10-01 20:45:04.590353] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:46:09.495 [2024-10-01 20:45:04.590932] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:46:09.495 [2024-10-01 20:45:04.591177] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:46:09.495 [2024-10-01 20:45:04.591191] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
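With the target up on all four cores, the multipath.sh flow traced below builds one ANA-reporting subsystem reachable over both addresses, connects to each path from the host, and later flips per-listener ANA states to force path failover. A skeleton collected from the trace that follows (the --hostnqn/--hostid arguments to nvme connect are the gen-hostnqn values the log prints and are elided here; the reading of -g/-G as TCP header/data digest flags is the editor's, not the log's):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r    # -r: enable ANA reporting
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420    # path A
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420    # path B
    # One kernel controller per path; -g/-G request header/data digests.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G
    # Failover during the fio run: mark path A inaccessible, keep B usable.
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.4 -s 4420 -n non_optimized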
00:46:10.063 20:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:10.063 20:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:46:10.063 20:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:46:10.063 20:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:10.063 20:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:46:10.063 20:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:10.063 20:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:46:10.638 [2024-10-01 20:45:05.580282] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:10.638 20:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:46:10.638 Malloc0 00:46:10.638 20:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:46:10.898 20:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:46:11.158 20:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:46:11.418 [2024-10-01 20:45:06.576188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:46:11.418 20:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:46:11.677 [2024-10-01 20:45:06.860184] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:46:11.677 20:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:46:11.936 20:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:46:11.936 20:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:46:11.936 20:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:46:11.936 20:45:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:46:11.936 20:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:46:11.936 20:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:46:14.481 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:46:14.481 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:46:14.481 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:46:14.481 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:46:14.481 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:46:14.481 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:46:14.481 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=103908 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:46:14.482 20:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:46:14.482 [global] 00:46:14.482 thread=1 00:46:14.482 invalidate=1 00:46:14.482 rw=randrw 00:46:14.482 time_based=1 00:46:14.482 runtime=6 00:46:14.482 ioengine=libaio 00:46:14.482 direct=1 00:46:14.482 bs=4096 00:46:14.482 iodepth=128 00:46:14.482 norandommap=0 00:46:14.482 numjobs=1 00:46:14.482 00:46:14.482 verify_dump=1 00:46:14.482 verify_backlog=512 00:46:14.482 verify_state_save=0 00:46:14.482 do_verify=1 00:46:14.482 verify=crc32c-intel 00:46:14.482 [job0] 00:46:14.482 filename=/dev/nvme0n1 00:46:14.482 Could not set queue depth (nvme0n1) 00:46:14.482 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:14.482 fio-3.35 00:46:14.482 Starting 1 thread 00:46:15.050 20:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:46:15.309 20:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:46:15.569 20:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:46:15.569 20:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:46:15.569 20:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:46:15.569 20:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:46:15.569 20:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:46:15.569 20:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:46:15.569 20:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:46:15.569 20:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:46:15.569 20:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:46:15.569 20:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:46:15.569 20:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:46:15.569 20:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:46:15.569 20:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:46:16.507 20:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:46:16.507 20:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:46:16.507 20:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:46:16.507 20:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:46:16.767 20:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:46:17.026 20:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:46:17.026 20:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:46:17.026 20:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:46:17.026 20:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:46:17.026 20:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:46:17.026 20:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:46:17.026 20:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:46:17.026 20:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:46:17.026 20:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:46:17.026 20:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:46:17.026 20:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:46:17.026 20:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:46:17.026 20:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:46:18.413 20:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:46:18.413 20:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:46:18.413 20:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:46:18.413 20:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 103908 00:46:20.315 00:46:20.315 job0: (groupid=0, jobs=1): err= 0: pid=103929: Tue Oct 1 20:45:15 2024 00:46:20.315 read: IOPS=10.8k, BW=42.4MiB/s (44.4MB/s)(254MiB/6006msec) 00:46:20.315 slat (usec): min=7, max=5036, avg=52.08, stdev=233.11 00:46:20.315 clat (usec): min=1652, max=56383, avg=7915.58, stdev=1848.92 00:46:20.315 lat (usec): min=1666, max=56392, avg=7967.66, stdev=1858.09 00:46:20.315 clat percentiles (usec): 00:46:20.315 | 1.00th=[ 5211], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 7111], 00:46:20.315 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7767], 60.00th=[ 8029], 00:46:20.315 | 70.00th=[ 8291], 80.00th=[ 8586], 90.00th=[ 9110], 95.00th=[ 9765], 00:46:20.315 | 99.00th=[11469], 99.50th=[11994], 99.90th=[47973], 99.95th=[53216], 00:46:20.315 | 99.99th=[55837] 00:46:20.315 bw ( KiB/s): min= 9232, max=29416, per=52.60%, avg=22820.55, stdev=6396.67, samples=11 00:46:20.315 iops : min= 2308, max= 7354, avg=5705.09, stdev=1599.16, samples=11 00:46:20.315 write: IOPS=6511, BW=25.4MiB/s (26.7MB/s)(137MiB/5385msec); 0 zone resets 00:46:20.315 slat (usec): min=15, max=3387, avg=63.83, stdev=150.27 00:46:20.315 clat (usec): min=1615, max=55683, avg=7018.51, stdev=1989.75 00:46:20.315 lat (usec): min=1660, max=55707, avg=7082.33, stdev=1991.70 00:46:20.315 clat percentiles (usec): 00:46:20.315 | 1.00th=[ 4228], 5.00th=[ 5604], 10.00th=[ 6063], 20.00th=[ 6456], 00:46:20.315 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:46:20.315 | 70.00th=[ 7308], 80.00th=[ 7504], 90.00th=[ 7767], 95.00th=[ 8029], 00:46:20.315 | 99.00th=[ 9896], 99.50th=[10945], 99.90th=[50594], 99.95th=[53216], 00:46:20.315 | 99.99th=[54264] 00:46:20.315 bw ( KiB/s): min= 9648, max=28672, per=87.78%, avg=22861.73, stdev=6062.89, samples=11 00:46:20.315 iops : min= 2412, max= 7168, avg=5715.27, stdev=1515.66, samples=11 00:46:20.315 lat (msec) : 2=0.02%, 4=0.35%, 10=96.41%, 20=3.10%, 50=0.04% 00:46:20.315 lat (msec) : 100=0.09% 00:46:20.315 cpu : usr=5.23%, sys=22.76%, ctx=6218, majf=0, minf=102 00:46:20.315 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:46:20.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:20.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:20.315 issued rwts: total=65145,35063,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:20.315 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:20.315 00:46:20.315 Run status group 0 (all jobs): 00:46:20.315 READ: bw=42.4MiB/s (44.4MB/s), 42.4MiB/s-42.4MiB/s (44.4MB/s-44.4MB/s), io=254MiB (267MB), run=6006-6006msec 00:46:20.315 WRITE: bw=25.4MiB/s (26.7MB/s), 25.4MiB/s-25.4MiB/s (26.7MB/s-26.7MB/s), io=137MiB (144MB), run=5385-5385msec 00:46:20.315 00:46:20.315 Disk stats (read/write): 00:46:20.315 nvme0n1: ios=64394/34206, merge=0/0, ticks=470948/230096, in_queue=701044, util=98.66% 00:46:20.315 20:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:46:20.573 20:45:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:46:20.832 20:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:46:20.832 20:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:46:20.832 20:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:46:20.832 20:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:46:20.832 20:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:46:20.832 20:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:46:20.832 20:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:46:20.832 20:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:46:20.832 20:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:46:20.832 20:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:46:20.832 20:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:46:20.832 20:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:46:20.832 20:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:46:22.206 20:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:46:22.206 20:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:46:22.206 20:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:46:22.206 20:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:46:22.206 20:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=104055 00:46:22.206 20:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:46:22.206 20:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:46:22.206 [global] 00:46:22.206 thread=1 00:46:22.206 invalidate=1 00:46:22.206 rw=randrw 00:46:22.206 time_based=1 00:46:22.206 runtime=6 00:46:22.206 ioengine=libaio 00:46:22.206 direct=1 00:46:22.206 bs=4096 00:46:22.206 iodepth=128 00:46:22.206 norandommap=0 00:46:22.206 numjobs=1 00:46:22.206 00:46:22.206 verify_dump=1 00:46:22.206 verify_backlog=512 00:46:22.206 verify_state_save=0 00:46:22.206 do_verify=1 00:46:22.206 verify=crc32c-intel 00:46:22.206 [job0] 00:46:22.206 filename=/dev/nvme0n1 00:46:22.206 Could not set queue depth (nvme0n1) 00:46:22.206 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:22.206 fio-3.35 00:46:22.206 Starting 1 thread 00:46:23.140 20:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:46:23.398 20:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:46:23.656 20:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:46:23.656 20:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:46:23.656 20:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:46:23.656 20:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:46:23.656 20:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:46:23.656 20:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:46:23.656 20:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:46:23.656 20:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:46:23.656 20:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:46:23.656 20:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:46:23.656 20:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:46:23.656 20:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:46:23.656 20:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:46:24.595 20:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:46:24.595 20:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:46:24.595 20:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:46:24.595 20:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:46:24.854 20:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:46:25.113 20:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:46:25.113 20:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:46:25.113 20:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:46:25.113 20:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:46:25.113 20:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:46:25.113 20:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:46:25.113 20:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:46:25.113 20:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:46:25.113 20:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:46:25.113 20:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:46:25.113 20:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:46:25.113 20:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:46:25.113 20:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:46:26.527 20:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:46:26.527 20:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:46:26.527 20:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:46:26.527 20:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 104055 00:46:28.429 00:46:28.429 job0: (groupid=0, jobs=1): err= 0: pid=104076: Tue Oct 1 20:45:23 2024 00:46:28.429 read: IOPS=11.6k, BW=45.5MiB/s (47.7MB/s)(275MiB/6046msec) 00:46:28.429 slat (usec): min=2, max=6809, avg=42.47, stdev=196.53 00:46:28.429 clat (usec): min=604, max=52158, avg=7360.57, stdev=2440.49 00:46:28.429 lat (usec): min=649, max=52169, avg=7403.05, stdev=2454.25 00:46:28.429 clat percentiles (usec): 00:46:28.429 | 1.00th=[ 3195], 5.00th=[ 4293], 10.00th=[ 4883], 20.00th=[ 5735], 00:46:28.429 | 30.00th=[ 6718], 40.00th=[ 7308], 50.00th=[ 7570], 60.00th=[ 7832], 00:46:28.429 | 70.00th=[ 8160], 80.00th=[ 8586], 90.00th=[ 8979], 95.00th=[ 9372], 00:46:28.429 | 99.00th=[11863], 99.50th=[12649], 99.90th=[48497], 99.95th=[50070], 00:46:28.429 | 99.99th=[52167] 00:46:28.429 bw ( KiB/s): min=11520, max=39640, per=54.98%, avg=25606.00, stdev=8479.62, samples=12 00:46:28.429 iops : min= 2880, max= 9910, avg=6401.50, stdev=2119.91, samples=12 00:46:28.429 write: IOPS=7138, BW=27.9MiB/s (29.2MB/s)(150MiB/5378msec); 0 zone resets 00:46:28.429 slat (usec): min=12, max=1845, avg=54.73, stdev=118.79 00:46:28.429 clat (usec): min=398, max=14072, avg=6488.09, stdev=1652.48 00:46:28.429 lat (usec): min=577, max=14098, avg=6542.82, stdev=1667.60 00:46:28.429 clat percentiles (usec): 00:46:28.429 | 1.00th=[ 2802], 5.00th=[ 3589], 10.00th=[ 4015], 20.00th=[ 4686], 00:46:28.429 | 30.00th=[ 5473], 40.00th=[ 6587], 50.00th=[ 7046], 60.00th=[ 7373], 00:46:28.429 | 70.00th=[ 7570], 80.00th=[ 7832], 90.00th=[ 8160], 95.00th=[ 8455], 00:46:28.429 | 99.00th=[ 9372], 99.50th=[10683], 99.90th=[12256], 99.95th=[12911], 00:46:28.429 | 99.99th=[13304] 00:46:28.429 bw ( KiB/s): min=12072, 
max=38816, per=89.63%, avg=25594.00, stdev=8269.46, samples=12 00:46:28.429 iops : min= 3018, max= 9704, avg=6398.50, stdev=2067.36, samples=12 00:46:28.429 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:46:28.429 lat (msec) : 2=0.16%, 4=5.32%, 10=92.07%, 20=2.31%, 50=0.07% 00:46:28.429 lat (msec) : 100=0.04% 00:46:28.429 cpu : usr=6.05%, sys=23.79%, ctx=10127, majf=0, minf=127 00:46:28.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:46:28.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:28.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:28.429 issued rwts: total=70399,38391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:28.429 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:28.429 00:46:28.429 Run status group 0 (all jobs): 00:46:28.429 READ: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=275MiB (288MB), run=6046-6046msec 00:46:28.429 WRITE: bw=27.9MiB/s (29.2MB/s), 27.9MiB/s-27.9MiB/s (29.2MB/s-29.2MB/s), io=150MiB (157MB), run=5378-5378msec 00:46:28.429 00:46:28.429 Disk stats (read/write): 00:46:28.429 nvme0n1: ios=69372/37919, merge=0/0, ticks=471272/234210, in_queue=705482, util=98.67% 00:46:28.429 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:46:28.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:46:28.429 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:46:28.429 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:46:28.429 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:46:28.429 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:28.429 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:46:28.429 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:28.429 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:46:28.429 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 
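Every failover step above is gated by the same check_ana_state polling helper; a reconstruction from the traced execution (multipath.sh lines 18-26 in this run) follows, assuming nothing beyond what the xtrace shows — the exact source may differ slightly:

    check_ana_state() {
        local path=$1 ana_state=$2                     # e.g. nvme0c1n1, inaccessible
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # Poll until the sysfs file exists and reports the expected ANA state;
        # give up after ~20 one-second retries.
        while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]] && sleep 1s; do
            ((timeout-- == 0)) && return 1
        done
    }

This is exactly the pattern visible in the trace: when a freshly set state has not yet propagated (e.g. "optimized != non-optimized"), the helper sleeps once and re-reads /sys/block/<path>/ana_state on the next pass.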
00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:28.688 rmmod nvme_tcp 00:46:28.688 rmmod nvme_fabrics 00:46:28.688 rmmod nvme_keyring 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n 103769 ']' 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # killprocess 103769 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 103769 ']' 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 103769 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 103769 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:28.688 killing process with pid 103769 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 103769' 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 103769 00:46:28.688 20:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 103769 00:46:28.947 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:46:28.947 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:46:28.947 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:46:28.947 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:46:28.947 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:46:28.947 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:46:28.947 20:45:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:46:28.947 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:28.947 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:46:28.947 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:46:28.947 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:46:28.947 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:46:28.947 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:46:29.206 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:46:29.206 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:46:29.206 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:46:29.206 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:46:29.206 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:46:29.206 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:46:29.206 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:46:29.206 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:29.206 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:29.206 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:46:29.206 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:29.206 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:29.206 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:29.206 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:46:29.206 ************************************ 00:46:29.206 END TEST nvmf_target_multipath 00:46:29.206 ************************************ 00:46:29.206 00:46:29.206 real 0m20.808s 00:46:29.206 user 1m9.435s 00:46:29.206 sys 0m10.366s 00:46:29.206 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:29.206 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:46:29.206 20:45:24 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:46:29.206 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:46:29.206 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:29.206 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:46:29.206 ************************************ 00:46:29.206 START TEST nvmf_zcopy 00:46:29.206 ************************************ 00:46:29.206 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:46:29.465 * Looking for test storage... 00:46:29.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:46:29.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:29.465 --rc genhtml_branch_coverage=1 00:46:29.465 --rc genhtml_function_coverage=1 00:46:29.465 --rc genhtml_legend=1 00:46:29.465 --rc geninfo_all_blocks=1 00:46:29.465 --rc geninfo_unexecuted_blocks=1 00:46:29.465 00:46:29.465 ' 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:46:29.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:29.465 --rc genhtml_branch_coverage=1 00:46:29.465 --rc genhtml_function_coverage=1 00:46:29.465 --rc genhtml_legend=1 00:46:29.465 --rc geninfo_all_blocks=1 00:46:29.465 --rc geninfo_unexecuted_blocks=1 00:46:29.465 00:46:29.465 ' 00:46:29.465 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:46:29.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:29.465 --rc genhtml_branch_coverage=1 00:46:29.466 --rc genhtml_function_coverage=1 00:46:29.466 --rc genhtml_legend=1 00:46:29.466 --rc geninfo_all_blocks=1 00:46:29.466 --rc geninfo_unexecuted_blocks=1 00:46:29.466 00:46:29.466 ' 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:46:29.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:29.466 --rc genhtml_branch_coverage=1 00:46:29.466 --rc genhtml_function_coverage=1 00:46:29.466 --rc genhtml_legend=1 00:46:29.466 --rc geninfo_all_blocks=1 00:46:29.466 --rc geninfo_unexecuted_blocks=1 00:46:29.466 00:46:29.466 ' 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:29.466 20:45:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@458 -- # nvmf_veth_init 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:46:29.466 20:45:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:46:29.466 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:29.467 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:46:29.467 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:46:29.467 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:46:29.467 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:46:29.467 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:46:29.467 Cannot find device "nvmf_init_br" 00:46:29.467 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:46:29.467 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:46:29.467 Cannot find device "nvmf_init_br2" 00:46:29.467 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:46:29.467 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:46:29.467 Cannot find device "nvmf_tgt_br" 00:46:29.467 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:46:29.467 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:46:29.467 Cannot find device "nvmf_tgt_br2" 00:46:29.467 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:46:29.467 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:46:29.467 Cannot find device "nvmf_init_br" 00:46:29.467 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:46:29.467 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:46:29.725 Cannot find device "nvmf_init_br2" 00:46:29.725 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:46:29.725 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:46:29.725 Cannot find device "nvmf_tgt_br" 00:46:29.725 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:46:29.725 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:46:29.725 Cannot find device "nvmf_tgt_br2" 00:46:29.725 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:46:29.725 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:46:29.725 Cannot find device 
"nvmf_br" 00:46:29.725 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:46:29.726 Cannot find device "nvmf_init_if" 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:46:29.726 Cannot find device "nvmf_init_if2" 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:29.726 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:29.726 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:46:29.726 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:46:29.984 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:46:29.984 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:46:29.984 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:46:29.984 20:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:46:29.984 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:46:29.984 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:46:29.984 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:46:29.984 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:46:29.984 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:46:29.984 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:46:29.984 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms
00:46:29.984
00:46:29.984 --- 10.0.0.3 ping statistics ---
00:46:29.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:46:29.984 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
00:46:29.984 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:46:29.984 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:46:29.984 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms
00:46:29.984
00:46:29.984 --- 10.0.0.4 ping statistics ---
00:46:29.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:46:29.984 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms
00:46:29.984 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:46:29.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:46:29.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms
00:46:29.984
00:46:29.984 --- 10.0.0.1 ping statistics ---
00:46:29.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:46:29.984 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms
00:46:29.984 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:46:29.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:46:29.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms
00:46:29.984
00:46:29.984 --- 10.0.0.2 ping statistics ---
00:46:29.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:46:29.984 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms
00:46:29.984 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:46:29.984 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@459 -- # return 0
00:46:29.984 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:46:29.984 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:46:29.984 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:46:29.984 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:46:29.984 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:46:29.984 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:46:29.984 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:46:29.984 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:46:29.985 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:46:29.985 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable
00:46:29.985 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:46:29.985 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=104409
00:46:29.985 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
00:46:29.985 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 104409
00:46:29.985 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 104409 ']'
00:46:29.985 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local
rpc_addr=/var/tmp/spdk.sock 00:46:29.985 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:29.985 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:29.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:29.985 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:29.985 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:46:29.985 [2024-10-01 20:45:25.128975] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:46:29.985 [2024-10-01 20:45:25.130279] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:46:29.985 [2024-10-01 20:45:25.130351] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:30.243 [2024-10-01 20:45:25.271374] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:30.243 [2024-10-01 20:45:25.342457] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:30.243 [2024-10-01 20:45:25.342521] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:30.243 [2024-10-01 20:45:25.342535] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:30.243 [2024-10-01 20:45:25.342545] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:30.243 [2024-10-01 20:45:25.342554] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:30.243 [2024-10-01 20:45:25.342597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:46:30.243 [2024-10-01 20:45:25.459065] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:46:30.243 [2024-10-01 20:45:25.459434] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
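The nvmf_veth_init trace above builds the test network: veth pairs toward the initiator and toward the target, the target ends moved into the nvmf_tgt_ns_spdk namespace, all bridge-side ends enslaved to nvmf_br, and iptables accepts for the NVMe/TCP port 4420. A minimal sketch of the same topology, reduced to one interface pair per side (names, addresses, and commands taken from the trace above; run as root):

    # Sketch only: one initiator pair and one target pair of the four created above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br                     # bridge both sides together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3   # initiator -> target reachability, as checked at @222 above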
00:46:30.243 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:30.243 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:46:30.243 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:46:30.243 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:30.243 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:46:30.501 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:46:30.502 [2024-10-01 20:45:25.547234] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:46:30.502 [2024-10-01 20:45:25.563599] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:46:30.502 20:45:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:46:30.502 malloc0 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:46:30.502 { 00:46:30.502 "params": { 00:46:30.502 "name": "Nvme$subsystem", 00:46:30.502 "trtype": "$TEST_TRANSPORT", 00:46:30.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:30.502 "adrfam": "ipv4", 00:46:30.502 "trsvcid": "$NVMF_PORT", 00:46:30.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:30.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:30.502 "hdgst": ${hdgst:-false}, 00:46:30.502 "ddgst": ${ddgst:-false} 00:46:30.502 }, 00:46:30.502 "method": "bdev_nvme_attach_controller" 00:46:30.502 } 00:46:30.502 EOF 00:46:30.502 )") 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:46:30.502 20:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:46:30.502 "params": { 00:46:30.502 "name": "Nvme1", 00:46:30.502 "trtype": "tcp", 00:46:30.502 "traddr": "10.0.0.3", 00:46:30.502 "adrfam": "ipv4", 00:46:30.502 "trsvcid": "4420", 00:46:30.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:30.502 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:30.502 "hdgst": false, 00:46:30.502 "ddgst": false 00:46:30.502 }, 00:46:30.502 "method": "bdev_nvme_attach_controller" 00:46:30.502 }' 00:46:30.502 [2024-10-01 20:45:25.662337] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
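The rpc_cmd wrappers traced above drive the target over its UNIX-domain RPC socket. The same configuration can be replayed by hand with scripts/rpc.py, using the arguments exactly as they appear in the trace (the default socket path /var/tmp/spdk.sock is assumed here):

    # Illustrative replay of the traced target setup; run from the SPDK repo root.
    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy         # TCP transport, zero-copy enabled
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0                # 32 MB malloc bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1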
00:46:30.502 [2024-10-01 20:45:25.662433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104445 ]
00:46:30.760 [2024-10-01 20:45:25.804276] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:46:30.760 [2024-10-01 20:45:25.877334] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:46:31.019 Running I/O for 10 seconds...
00:46:40.897 5865.00 IOPS, 45.82 MiB/s
5922.50 IOPS, 46.27 MiB/s
5917.33 IOPS, 46.23 MiB/s
5888.25 IOPS, 46.00 MiB/s
5888.60 IOPS, 46.00 MiB/s
5915.50 IOPS, 46.21 MiB/s
5951.86 IOPS, 46.50 MiB/s
5966.38 IOPS, 46.61 MiB/s
5992.33 IOPS, 46.82 MiB/s
5995.50 IOPS, 46.84 MiB/s
00:46:40.897
00:46:40.897 Latency(us)
00:46:40.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:46:40.897 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:46:40.897 Verification LBA range: start 0x0 length 0x1000
00:46:40.897 Nvme1n1 : 10.01 5999.37 46.87 0.00 0.00 21267.69 2010.76 32172.22
00:46:40.898 ===================================================================================================================
00:46:40.898 Total : 5999.37 46.87 0.00 0.00 21267.69 2010.76 32172.22
00:46:41.157 20:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=104559
00:46:41.157 20:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:46:41.157 20:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:46:41.157 20:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:46:41.157 20:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=()
00:46:41.157 20:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:46:41.157 20:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config
00:46:41.157 20:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:46:41.157 20:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:46:41.157 {
00:46:41.157 "params": {
00:46:41.157 "name": "Nvme$subsystem",
00:46:41.157 "trtype": "$TEST_TRANSPORT",
00:46:41.157 "traddr": "$NVMF_FIRST_TARGET_IP",
00:46:41.157 "adrfam": "ipv4",
00:46:41.157 "trsvcid": "$NVMF_PORT",
00:46:41.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:46:41.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:46:41.157 "hdgst": ${hdgst:-false},
00:46:41.157 "ddgst": ${ddgst:-false}
00:46:41.157 },
00:46:41.157 "method": "bdev_nvme_attach_controller"
00:46:41.157 }
00:46:41.157 EOF
00:46:41.157 )")
00:46:41.157 20:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat
00:46:41.157 20:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq .
00:46:41.157 [2024-10-01 20:45:36.307231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.157 [2024-10-01 20:45:36.307267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.157 20:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:46:41.157 20:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:46:41.157 "params": { 00:46:41.157 "name": "Nvme1", 00:46:41.157 "trtype": "tcp", 00:46:41.157 "traddr": "10.0.0.3", 00:46:41.157 "adrfam": "ipv4", 00:46:41.157 "trsvcid": "4420", 00:46:41.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:41.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:41.157 "hdgst": false, 00:46:41.157 "ddgst": false 00:46:41.157 }, 00:46:41.158 "method": "bdev_nvme_attach_controller" 00:46:41.158 }' 00:46:41.158 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.158 [2024-10-01 20:45:36.319222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.158 [2024-10-01 20:45:36.319248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.158 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.158 [2024-10-01 20:45:36.331155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.158 [2024-10-01 20:45:36.331181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.158 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.158 [2024-10-01 20:45:36.343208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.158 [2024-10-01 20:45:36.343238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.158 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.158 [2024-10-01 20:45:36.355092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.158 [2024-10-01 20:45:36.355148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.158 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.158 [2024-10-01 20:45:36.360338] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
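This second bdevperf instance reads its bdev configuration from file descriptor 63 (--json /dev/fd/63), i.e. the JSON that gen_nvmf_target_json printed above. A hedged standalone equivalent with the config inlined; note the outer "subsystems" wrapper is an assumption here, since only the inner attach-controller stanza appears in the trace:

    # Assumed wrapper around the stanza printed above; the fd trick becomes <(...).
    cfg='{"subsystems":[{"subsystem":"bdev","config":[{
          "method":"bdev_nvme_attach_controller",
          "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.3",
                    "adrfam":"ipv4","trsvcid":"4420",
                    "subnqn":"nqn.2016-06.io.spdk:cnode1",
                    "hostnqn":"nqn.2016-06.io.spdk:host1",
                    "hdgst":false,"ddgst":false}}]}]}'
    build/examples/bdevperf --json <(printf '%s' "$cfg") \
        -t 5 -q 128 -w randrw -M 50 -o 8192   # 5 s run, QD 128, 50/50 randrw, 8 KiB I/Os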
00:46:41.158 [2024-10-01 20:45:36.360432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104559 ] 00:46:41.158 [2024-10-01 20:45:36.367211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.158 [2024-10-01 20:45:36.367239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.158 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.158 [2024-10-01 20:45:36.379239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.158 [2024-10-01 20:45:36.379267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.158 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.158 [2024-10-01 20:45:36.391173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.158 [2024-10-01 20:45:36.391199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.158 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.158 [2024-10-01 20:45:36.403138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.158 [2024-10-01 20:45:36.403161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.158 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.417 [2024-10-01 20:45:36.415148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.417 [2024-10-01 20:45:36.415170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.417 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.417 [2024-10-01 20:45:36.427114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.417 [2024-10-01 20:45:36.427143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.418 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:46:41.418 [2024-10-01 20:45:36.439111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.418 [2024-10-01 20:45:36.439140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.418 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.418 [2024-10-01 20:45:36.451164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.418 [2024-10-01 20:45:36.451186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.418 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.418 [2024-10-01 20:45:36.463112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.418 [2024-10-01 20:45:36.463141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.418 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.418 [2024-10-01 20:45:36.475110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.418 [2024-10-01 20:45:36.475138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.418 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.418 [2024-10-01 20:45:36.487112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.418 [2024-10-01 20:45:36.487140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.418 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.418 [2024-10-01 20:45:36.499105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.418 [2024-10-01 20:45:36.499149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.418 [2024-10-01 20:45:36.500951] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:41.418 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.418 [2024-10-01 20:45:36.511173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.418 [2024-10-01 20:45:36.511201] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.418 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.418 [2024-10-01 20:45:36.523113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.418 [2024-10-01 20:45:36.523159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.418 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.418 [2024-10-01 20:45:36.535159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.418 [2024-10-01 20:45:36.535187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.418 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.418 [2024-10-01 20:45:36.547176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.418 [2024-10-01 20:45:36.547202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.418 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.418 [2024-10-01 20:45:36.554293] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:46:41.418 [2024-10-01 20:45:36.559146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.418 [2024-10-01 20:45:36.559170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.418 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.418 [2024-10-01 20:45:36.571250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.418 [2024-10-01 20:45:36.571302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.418 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.418 [2024-10-01 20:45:36.583202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.418 [2024-10-01 20:45:36.583235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.418 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.418 [2024-10-01 20:45:36.595223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.418 [2024-10-01 20:45:36.595279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.418 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.418 [2024-10-01 20:45:36.607228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.418 [2024-10-01 20:45:36.607279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.418 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.418 [2024-10-01 20:45:36.619276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.418 [2024-10-01 20:45:36.619308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.418 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.418 [2024-10-01 20:45:36.631135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.418 [2024-10-01 20:45:36.631161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.418 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.418 [2024-10-01 20:45:36.643150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.418 [2024-10-01 20:45:36.643180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.418 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.418 [2024-10-01 20:45:36.655155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.418 [2024-10-01 20:45:36.655186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.418 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.418 [2024-10-01 20:45:36.667149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:46:41.418 [2024-10-01 20:45:36.667170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.678 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.679 [2024-10-01 20:45:36.679123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.679 [2024-10-01 20:45:36.679157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.679 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.679 [2024-10-01 20:45:36.691146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.679 [2024-10-01 20:45:36.691168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.679 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.679 [2024-10-01 20:45:36.703137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.679 [2024-10-01 20:45:36.703159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.679 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.679 [2024-10-01 20:45:36.715154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.679 [2024-10-01 20:45:36.715172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.679 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.679 [2024-10-01 20:45:36.727193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.679 [2024-10-01 20:45:36.727220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.679 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.679 [2024-10-01 20:45:36.739117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.679 [2024-10-01 20:45:36.739151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.679 Running I/O for 5 seconds... 
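The repeated error pairs above (and below) come from re-issuing nvmf_subsystem_add_ns while NSID 1 is still attached and bdevperf I/O is in flight; each attempt is rejected with JSON-RPC code -32602 and the run simply continues, so the rejections are evidently tolerated by the caller. A sketch of that tolerant retry pattern (illustrative shell, not the test's own loop; RPC name and arguments as in the error lines):

    # Keep retrying; "Requested NSID 1 already in use" surfaces as -32602.
    until scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; do
        sleep 0.1   # NSID 1 still occupied; retry after a short pause
    done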
00:46:41.679 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.679 [2024-10-01 20:45:36.757072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.679 [2024-10-01 20:45:36.757118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.679 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.679 [2024-10-01 20:45:36.772058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.679 [2024-10-01 20:45:36.772091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.679 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.679 [2024-10-01 20:45:36.790002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.679 [2024-10-01 20:45:36.790045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.679 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.679 [2024-10-01 20:45:36.803602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.679 [2024-10-01 20:45:36.803645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.679 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.679 [2024-10-01 20:45:36.822078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.679 [2024-10-01 20:45:36.822121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.679 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.679 [2024-10-01 20:45:36.835568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.679 [2024-10-01 20:45:36.835612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.679 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
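Each "error on JSON-RPC call" line is apparently emitted by a Go-based RPC client (note the Go-style "2024/10/01" timestamp) and corresponds to one request/response pair on the RPC socket; -32602 is the standard JSON-RPC 2.0 "invalid params" code. Reconstructed from the logged method and params map, the exchange has roughly this shape (illustrative only, not captured from the wire):

    cat <<'EOF'   # request/response shape behind one logged failure
    --> {"jsonrpc":"2.0","id":1,"method":"nvmf_subsystem_add_ns",
         "params":{"nqn":"nqn.2016-06.io.spdk:cnode1",
                   "namespace":{"bdev_name":"malloc0","nsid":1,"no_auto_visible":false}}}
    <-- {"jsonrpc":"2.0","id":1,"error":{"code":-32602,"message":"Invalid parameters"}}
    EOF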
00:46:41.679 [2024-10-01 20:45:36.854266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.679 [2024-10-01 20:45:36.854309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.679 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.679 [2024-10-01 20:45:36.872985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.679 [2024-10-01 20:45:36.873028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.679 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.679 [2024-10-01 20:45:36.889201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.679 [2024-10-01 20:45:36.889255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.679 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.679 [2024-10-01 20:45:36.904731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.679 [2024-10-01 20:45:36.904791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.679 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.679 [2024-10-01 20:45:36.921064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.679 [2024-10-01 20:45:36.921107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.679 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.939 [2024-10-01 20:45:36.932172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.939 [2024-10-01 20:45:36.932201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.939 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:41.939 [2024-10-01 20:45:36.948503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:41.939 [2024-10-01 20:45:36.948549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:41.939 2024/10/01 20:45:36 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:46:41.939 [2024-10-01 20:45:36.965033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:46:41.939 [2024-10-01 20:45:36.965080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:46:41.939 2024/10/01 20:45:36 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:46:42.719 [the three lines above repeat, differing only in timestamps, for every retry from 20:45:36.979 through 20:45:37.742]
00:46:42.719 11355.00 IOPS, 88.71 MiB/s
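The records above are a negative-path loop: every call tries to attach bdev malloc0 to subsystem nqn.2016-06.io.spdk:cnode1 under NSID 1, which the first successful add already claimed, so the target rejects each retry with JSON-RPC error -32602 (the %!s(bool=false) fragment is the Go client printing the boolean no_auto_visible field with a %s verb). A minimal shell sketch of the same sequence via SPDK's rpc.py, assuming a running target; the malloc sizing is invented, only the bdev name, nqn, and NSID come from the log:

  scripts/rpc.py bdev_malloc_create -b malloc0 64 512                            # backing bdev (64 MiB / 512 B blocks assumed)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a             # nqn taken from the log
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add claims NSID 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # any repeat fails: NSID 1 already in use -> Code=-32602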
00:46:43.498 [error triplet keeps repeating for retries 20:45:37.761 through 20:45:38.729]
00:46:43.498 11392.00 IOPS, 89.00 MiB/s
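The interleaved throughput samples show the data-path I/O carrying on while the RPC retries fail. Both samples are consistent with an 8 KiB I/O size, which is an inference from the arithmetic below, not something the log states:

  echo 'scale=2; 11355 * 8192 / 1048576' | bc   # 88.71 -> matches the 11355.00 IOPS sample
  echo 'scale=2; 11392 * 8192 / 1048576' | bc   # 89.00 -> matches the 11392.00 IOPS sample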
00:46:44.028 [error triplet keeps repeating for retries 20:45:38.746 through 20:45:39.155]
00:46:44.028 [2024-10-01 20:45:39.175579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:46:44.028 [2024-10-01 20:45:39.175610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:46:44.028 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1]
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.028 [2024-10-01 20:45:39.191849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.028 [2024-10-01 20:45:39.191876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.029 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.029 [2024-10-01 20:45:39.208773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.029 [2024-10-01 20:45:39.208816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.029 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.029 [2024-10-01 20:45:39.226250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.029 [2024-10-01 20:45:39.226277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.029 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.029 [2024-10-01 20:45:39.246287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.029 [2024-10-01 20:45:39.246331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.029 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.029 [2024-10-01 20:45:39.259005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.029 [2024-10-01 20:45:39.259056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.029 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.301 [2024-10-01 20:45:39.280046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.301 [2024-10-01 20:45:39.280079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.301 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.301 [2024-10-01 20:45:39.296151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.301 [2024-10-01 20:45:39.296184] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.302 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.302 [2024-10-01 20:45:39.314626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.302 [2024-10-01 20:45:39.314670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.302 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.302 [2024-10-01 20:45:39.332777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.302 [2024-10-01 20:45:39.332819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.302 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.302 [2024-10-01 20:45:39.347051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.302 [2024-10-01 20:45:39.347093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.302 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.302 [2024-10-01 20:45:39.357526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.302 [2024-10-01 20:45:39.357570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.302 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.302 [2024-10-01 20:45:39.372557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.302 [2024-10-01 20:45:39.372599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.302 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.302 [2024-10-01 20:45:39.388125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.302 [2024-10-01 20:45:39.388179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.302 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.302 [2024-10-01 20:45:39.405726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.302 [2024-10-01 20:45:39.405769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.302 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.302 [2024-10-01 20:45:39.426115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.302 [2024-10-01 20:45:39.426184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.302 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.302 [2024-10-01 20:45:39.440059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.302 [2024-10-01 20:45:39.440103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.302 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.302 [2024-10-01 20:45:39.458395] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.302 [2024-10-01 20:45:39.458439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.302 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.302 [2024-10-01 20:45:39.470802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.302 [2024-10-01 20:45:39.470862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.302 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.302 [2024-10-01 20:45:39.491912] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.302 [2024-10-01 20:45:39.491968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.302 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.302 [2024-10-01 20:45:39.508682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.302 [2024-10-01 20:45:39.508726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:46:44.302 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.302 [2024-10-01 20:45:39.524938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.302 [2024-10-01 20:45:39.524982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.302 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.302 [2024-10-01 20:45:39.541158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.302 [2024-10-01 20:45:39.541212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.302 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.562 [2024-10-01 20:45:39.557502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.562 [2024-10-01 20:45:39.557532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.562 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.562 [2024-10-01 20:45:39.568019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.562 [2024-10-01 20:45:39.568051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.562 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.562 [2024-10-01 20:45:39.584015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.562 [2024-10-01 20:45:39.584046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.562 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.562 [2024-10-01 20:45:39.600953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.562 [2024-10-01 20:45:39.600997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.562 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:46:44.562 [2024-10-01 20:45:39.617075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.562 [2024-10-01 20:45:39.617102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.563 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.563 [2024-10-01 20:45:39.632845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.563 [2024-10-01 20:45:39.632889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.563 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.563 [2024-10-01 20:45:39.647764] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.563 [2024-10-01 20:45:39.647811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.563 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.563 [2024-10-01 20:45:39.666751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.563 [2024-10-01 20:45:39.666797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.563 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.563 [2024-10-01 20:45:39.684569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.563 [2024-10-01 20:45:39.684613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.563 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.563 [2024-10-01 20:45:39.699872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.563 [2024-10-01 20:45:39.699915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.563 2024/10/01 20:45:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:44.563 [2024-10-01 20:45:39.717961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:44.563 [2024-10-01 20:45:39.717988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:44.563 2024/10/01 20:45:39 error on 
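For reference, each failure above is one JSON-RPC round trip in which the target is asked to add NSID 1 to a subsystem that already owns it, and the Go client logs the -32602 response (the %!s(bool=false) token is Go's fmt placeholder for a bool printed with the %s verb). Below is a minimal sketch of the exchange, reconstructed from the logged params map; the use of scripts/rpc.py against the default RPC socket is an assumption, not something this log shows:

  # Hypothetical reproduction: re-add an NSID the subsystem already owns.
  # Assumes a running SPDK nvmf target on the default /var/tmp/spdk.sock socket.
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
  # Raw request/response pair implied by the log lines above:
  # -> {"jsonrpc":"2.0","id":1,"method":"nvmf_subsystem_add_ns",
  #      "params":{"nqn":"nqn.2016-06.io.spdk:cnode1",
  #                "namespace":{"bdev_name":"malloc0","nsid":1,"no_auto_visible":false}}}
  # <- {"jsonrpc":"2.0","id":1,"error":{"code":-32602,"message":"Invalid parameters"}}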
00:46:44.563 11422.67 IOPS, 89.24 MiB/s
[... the same error sequence continues, 2024-10-01 20:45:39.749 through 20:45:40.734, roughly 60 more iterations omitted ...]
00:46:45.605 11457.50 IOPS, 89.51 MiB/s
[... the same error sequence continues, 2024-10-01 20:45:40.753 through 20:45:41.029, roughly 17 more iterations omitted ...]
00:46:45.865 [2024-10-01 20:45:41.044597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:45.865 [2024-10-01 20:45:41.044657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:45.865 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:45.865 [2024-10-01 20:45:41.061466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:45.865 [2024-10-01 20:45:41.061512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:45.865 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:45.865 [2024-10-01 20:45:41.083932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:45.865 [2024-10-01 20:45:41.083978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:45.865 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:45.865 [2024-10-01 20:45:41.101546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:45.865 [2024-10-01 20:45:41.101573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:45.865 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:45.865 [2024-10-01 20:45:41.114673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:45.865 [2024-10-01 20:45:41.114703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.124 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.125 [2024-10-01 20:45:41.135158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.125 [2024-10-01 20:45:41.135196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.125 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.125 [2024-10-01 20:45:41.146770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.125 [2024-10-01 20:45:41.146797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.125 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.125 [2024-10-01 20:45:41.163963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.125 [2024-10-01 20:45:41.163993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:46:46.125 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.125 [2024-10-01 20:45:41.179455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.125 [2024-10-01 20:45:41.179499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.125 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.125 [2024-10-01 20:45:41.198795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.125 [2024-10-01 20:45:41.198822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.125 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.125 [2024-10-01 20:45:41.217126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.125 [2024-10-01 20:45:41.217180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.125 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.125 [2024-10-01 20:45:41.232767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.125 [2024-10-01 20:45:41.232797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.125 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.125 [2024-10-01 20:45:41.248688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.125 [2024-10-01 20:45:41.248732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.125 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.125 [2024-10-01 20:45:41.264419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.125 [2024-10-01 20:45:41.264478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.125 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:46:46.125 [2024-10-01 20:45:41.281099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.125 [2024-10-01 20:45:41.281166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.125 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.125 [2024-10-01 20:45:41.295934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.125 [2024-10-01 20:45:41.295983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.125 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.125 [2024-10-01 20:45:41.313441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.125 [2024-10-01 20:45:41.313484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.125 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.125 [2024-10-01 20:45:41.328670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.125 [2024-10-01 20:45:41.328715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.125 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.125 [2024-10-01 20:45:41.344645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.125 [2024-10-01 20:45:41.344688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.125 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.125 [2024-10-01 20:45:41.363995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.125 [2024-10-01 20:45:41.364028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.125 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.385 [2024-10-01 20:45:41.381946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.385 [2024-10-01 20:45:41.382067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.385 2024/10/01 20:45:41 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.385 [2024-10-01 20:45:41.403261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.385 [2024-10-01 20:45:41.403400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.385 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.385 [2024-10-01 20:45:41.416457] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.385 [2024-10-01 20:45:41.416582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.385 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.385 [2024-10-01 20:45:41.434867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.385 [2024-10-01 20:45:41.434995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.385 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.385 [2024-10-01 20:45:41.452861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.385 [2024-10-01 20:45:41.452988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.385 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.385 [2024-10-01 20:45:41.475053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.385 [2024-10-01 20:45:41.475193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.385 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.385 [2024-10-01 20:45:41.490599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.386 [2024-10-01 20:45:41.490740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.386 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.386 [2024-10-01 20:45:41.508862] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.386 [2024-10-01 20:45:41.508995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.386 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.386 [2024-10-01 20:45:41.533030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.386 [2024-10-01 20:45:41.533181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.386 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.386 [2024-10-01 20:45:41.551322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.386 [2024-10-01 20:45:41.551405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.386 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.386 [2024-10-01 20:45:41.569563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.386 [2024-10-01 20:45:41.569643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.386 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.386 [2024-10-01 20:45:41.592324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.386 [2024-10-01 20:45:41.592473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.386 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.386 [2024-10-01 20:45:41.612106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.386 [2024-10-01 20:45:41.612253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.386 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.386 [2024-10-01 20:45:41.636683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.386 [2024-10-01 20:45:41.636815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.645 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.645 [2024-10-01 20:45:41.655632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.645 [2024-10-01 20:45:41.655759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.645 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.645 [2024-10-01 20:45:41.679419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.645 [2024-10-01 20:45:41.679559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.645 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.645 [2024-10-01 20:45:41.704077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.645 [2024-10-01 20:45:41.704216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.645 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.645 [2024-10-01 20:45:41.728252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.645 [2024-10-01 20:45:41.728363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.645 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.645 11252.40 IOPS, 87.91 MiB/s [2024-10-01 20:45:41.748525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:46:46.645 [2024-10-01 20:45:41.748664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:46.645 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:46.645 00:46:46.645 Latency(us) 00:46:46.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:46.645 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:46:46.645 Nvme1n1 : 5.01 11250.32 87.89 0.00 0.00 11360.61 2651.23 20852.36 00:46:46.645 =================================================================================================================== 00:46:46.645 Total : 11250.32 87.89 0.00 0.00 11360.61 2651.23 20852.36 00:46:46.646 [2024-10-01 20:45:41.759195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
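Every Code=-32602 record collapsed above is one failed JSON-RPC round trip: the test keeps re-adding a namespace ID the subsystem already owns. For readers who want to reproduce a single iteration by hand, a minimal sketch with scripts/rpc.py from the SPDK repo root (this assumes a target already listening on the default /var/tmp/spdk.sock with subsystem cnode1 and bdev malloc0 created; it is not part of the captured log):

    # first add claims NSID 1 for malloc0 and succeeds on a fresh subsystem
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # any repeat with the same NSID is rejected exactly as logged:
    # target side: "Requested NSID 1 already in use"; caller side: Code=-32602 Msg=Invalid parameters
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1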
00:46:46.646 [2024-10-01 20:45:41.759195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:46:46.646 [2024-10-01 20:45:41.759272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:46:46.646 2024/10/01 20:45:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[the identical failure repeats nineteen more times at roughly 12 ms intervals, 20:45:41.771207 through 20:45:41.987232]
00:46:46.906 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (104559) - No such process
00:46:46.906 20:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 104559
00:46:46.906 20:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:46:46.906 20:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:46:46.906 20:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:46:46.906 20:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:46:46.906 20:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:46:46.906 20:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:46:46.906 20:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:46:46.906 delay0
00:46:46.906 20:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:46:46.906 20:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:46:46.906 20:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:46:46.906 20:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:46:46.906 20:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:46:46.906 20:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
00:46:47.166 [2024-10-01 20:45:42.182975] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:46:55.281 Initializing NVMe Controllers
00:46:55.281 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:46:55.281 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:46:55.281 Initialization complete. Launching workers.
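The namespace swap traced above is plain JSON-RPC plus the bundled abort example; the test's rpc_cmd wrapper forwards its arguments to scripts/rpc.py. A hand-driven equivalent, with every name and flag taken from the trace (run from the repo root, RPC socket left at its default):

    # detach NSID 1 (malloc0), interpose a delay bdev, and re-attach it as NSID 1
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # -r/-t set average/p99 read latency and -w/-n average/p99 write latency for the delay bdev
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # drive abortable random I/O at the now-slow namespace for 5 s at queue depth 64
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'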
00:46:55.281 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 269, failed: 17897
00:46:55.281 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 18048, failed to submit 118
00:46:55.281 success 17897, unsuccessful 151, failed 0
00:46:55.281 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:46:55.281 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:46:55.281 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:46:55.281 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:46:55.281 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:46:55.281 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:46:55.281 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:46:55.281 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:46:55.281 rmmod nvme_tcp
00:46:55.281 rmmod nvme_fabrics
00:46:55.281 rmmod nvme_keyring
00:46:55.281 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:46:55.281 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:46:55.281 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:46:55.281 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 104409 ']'
00:46:55.281 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 104409
00:46:55.281 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 104409 ']'
00:46:55.281 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 104409
00:46:55.281 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:46:55.281 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:46:55.281 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 104409
00:46:55.281 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:46:55.282 killing process with pid 104409
20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 104409'
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 104409
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 104409
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
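The killprocess trace above is the harness's usual guard-then-reap sequence. Stripped of the xtrace plumbing and the reactor/sudo sanity checks, the pattern reduces to the following sketch (a simplified reconstruction for illustration, not the verbatim helper):

    # kill an app started by this shell and wait for it to be reaped
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if it is already gone
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                              # collect the exit status
    }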
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0
00:46:55.282
00:46:55.282 real 0m25.373s
00:46:55.282 user 0m39.078s
00:46:55.282 sys 0m8.406s
20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
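nvmf_veth_fini above dismantles the virtual topology that nvmftestinit builds for NET_TYPE=virt: veth pairs bridged on the host, with the target ends moved into the nvmf_tgt_ns_spdk namespace. Reconstructed from the deletions in the trace (interface names and the 10.0.0.1/10.0.0.3 addresses come from the log; command order and flags are illustrative, not the verbatim setup code), the setup is roughly:

    # initiator-side veth pair; the *_br end will join the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    # target-side pair, with the target end pushed into its own netns
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # one bridge stitches the host-side peers together
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up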
20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:46:55.282 ************************************
00:46:55.282 END TEST nvmf_zcopy
00:46:55.282 ************************************
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:46:55.282 ************************************
00:46:55.282 START TEST nvmf_nmic
00:46:55.282 ************************************
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:46:55.282 * Looking for test storage...
00:46:55.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version
00:46:55.282 20:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:46:55.282 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:46:55.282 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:46:55.282 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:46:55.282 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:46:55.282 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:46:55.282 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:46:55.282 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:46:55.282 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:46:55.282 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:55.282 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:46:55.282 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:46:55.282 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:46:55.282 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:46:55.282 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:55.282 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:46:55.282 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:46:55.282 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:55.282 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:55.282 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:46:55.282 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:55.282 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:46:55.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:55.282 --rc genhtml_branch_coverage=1 00:46:55.282 --rc genhtml_function_coverage=1 00:46:55.282 --rc genhtml_legend=1 00:46:55.282 --rc geninfo_all_blocks=1 00:46:55.282 --rc geninfo_unexecuted_blocks=1 00:46:55.282 00:46:55.283 ' 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:46:55.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:55.283 --rc genhtml_branch_coverage=1 00:46:55.283 --rc genhtml_function_coverage=1 00:46:55.283 --rc genhtml_legend=1 00:46:55.283 --rc geninfo_all_blocks=1 00:46:55.283 --rc geninfo_unexecuted_blocks=1 00:46:55.283 00:46:55.283 ' 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:46:55.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:55.283 --rc genhtml_branch_coverage=1 00:46:55.283 --rc genhtml_function_coverage=1 00:46:55.283 --rc genhtml_legend=1 00:46:55.283 --rc geninfo_all_blocks=1 00:46:55.283 --rc geninfo_unexecuted_blocks=1 00:46:55.283 00:46:55.283 ' 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:46:55.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:55.283 --rc genhtml_branch_coverage=1 00:46:55.283 --rc genhtml_function_coverage=1 00:46:55.283 --rc genhtml_legend=1 00:46:55.283 --rc geninfo_all_blocks=1 00:46:55.283 --rc geninfo_unexecuted_blocks=1 00:46:55.283 00:46:55.283 ' 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:55.283 20:45:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@458 -- # nvmf_veth_init 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:46:55.283 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:46:55.284 Cannot find device "nvmf_init_br" 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:46:55.284 Cannot find device "nvmf_init_br2" 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:46:55.284 Cannot find device "nvmf_tgt_br" 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:46:55.284 Cannot find device "nvmf_tgt_br2" 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:46:55.284 Cannot find device "nvmf_init_br" 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:46:55.284 Cannot find device "nvmf_init_br2" 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:46:55.284 Cannot find device "nvmf_tgt_br" 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:46:55.284 Cannot find device "nvmf_tgt_br2" 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # true 
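Note: the "Cannot find device" messages in this stretch are expected. nvmf_veth_init first tears down any leftover topology from a previous run, and the trace shows each ip command immediately followed by true on the same script line, i.e. the usual guard that keeps a set -e script alive when the device does not exist yet. A minimal sketch of that idiom (interface names taken from the log; the loop itself is an illustration, not the literal common.sh code):

    # hedged sketch: delete-if-present cleanup before building a fresh veth topology
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster || true   # "Cannot find device" is harmless here
        ip link set "$dev" down || true
    done
    ip link delete nvmf_br type bridge || true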
00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:46:55.284 Cannot find device "nvmf_br" 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:46:55.284 Cannot find device "nvmf_init_if" 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:46:55.284 Cannot find device "nvmf_init_if2" 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:55.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:55.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:46:55.284 20:45:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:46:55.284 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:46:55.284 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:46:55.284 00:46:55.284 --- 10.0.0.3 ping statistics --- 00:46:55.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:55.284 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:46:55.284 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:46:55.284 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:46:55.284 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:46:55.284 00:46:55.284 --- 10.0.0.4 ping statistics --- 00:46:55.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:55.285 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:46:55.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:46:55.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:46:55.285 00:46:55.285 --- 10.0.0.1 ping statistics --- 00:46:55.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:55.285 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:46:55.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:55.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:46:55.285 00:46:55.285 --- 10.0.0.2 ping statistics --- 00:46:55.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:55.285 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@459 -- # return 0 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=104937 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 104937 00:46:55.285 20:45:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 104937 ']' 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:55.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:55.285 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:55.544 [2024-10-01 20:45:50.570765] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:46:55.544 [2024-10-01 20:45:50.572176] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:46:55.544 [2024-10-01 20:45:50.572484] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:55.544 [2024-10-01 20:45:50.714768] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:55.803 [2024-10-01 20:45:50.802287] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:55.803 [2024-10-01 20:45:50.802559] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:55.803 [2024-10-01 20:45:50.802849] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:55.803 [2024-10-01 20:45:50.803061] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:55.803 [2024-10-01 20:45:50.803251] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:55.803 [2024-10-01 20:45:50.803538] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:46:55.803 [2024-10-01 20:45:50.803656] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:46:55.803 [2024-10-01 20:45:50.803728] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:46:55.803 [2024-10-01 20:45:50.803722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:46:55.803 [2024-10-01 20:45:50.916464] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:46:55.803 [2024-10-01 20:45:50.916487] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:46:55.803 [2024-10-01 20:45:50.916871] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
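The target here is launched inside the nvmf_tgt_ns_spdk namespace with --interrupt-mode, which is what the spdk_interrupt_mode_enable notice and the per-thread "to intr mode" notices confirm: all four reactors (cores 0-3) and the poll-group threads come up interrupt-driven rather than polling. A hedged sketch of the launch-and-wait pattern visible in the trace (the launch command is copied from the log; the polling loop is an illustrative stand-in for the waitforlisten helper, which also checks RPC responsiveness):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    # wait until the app exposes its JSON-RPC UNIX socket
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done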
00:46:55.803 [2024-10-01 20:45:50.917107] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:46:55.803 [2024-10-01 20:45:50.917742] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:46:55.803 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:55.803 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:46:55.803 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:46:55.803 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:55.803 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:55.803 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:55.803 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:55.803 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:55.803 20:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:55.803 [2024-10-01 20:45:51.000832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:55.803 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:55.803 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:46:55.803 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:55.803 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:55.803 Malloc0 00:46:55.803 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:55.803 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:46:55.803 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:55.804 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:55.804 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:55.804 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:46:55.804 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:55.804 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:55.804 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:55.804 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
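rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; spelled out, the provisioning sequence above is equivalent to the following calls (all arguments copied from the log, comments are interpretive):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport with the traced options
    $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420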
00:46:55.804 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:55.804 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:56.063 [2024-10-01 20:45:51.056957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:46:56.063 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:56.063 test case1: single bdev can't be used in multiple subsystems 00:46:56.063 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:46:56.063 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:46:56.063 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:56.063 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:56.063 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:56.063 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:46:56.063 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:56.063 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:56.063 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:56.063 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:46:56.063 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:46:56.063 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:56.063 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:56.063 [2024-10-01 20:45:51.084601] bdev.c:8241:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:46:56.063 [2024-10-01 20:45:51.084647] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:46:56.063 [2024-10-01 20:45:51.084661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:46:56.063 2024/10/01 20:45:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:46:56.063 request: 00:46:56.063 { 00:46:56.063 "method": "nvmf_subsystem_add_ns", 00:46:56.063 "params": { 00:46:56.063 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:46:56.063 "namespace": { 00:46:56.063 "bdev_name": "Malloc0", 00:46:56.063 "no_auto_visible": false 00:46:56.063 } 00:46:56.063 } 00:46:56.063 } 00:46:56.063 Got JSON-RPC error response 00:46:56.063 GoRPCClient: error on JSON-RPC call 00:46:56.063 20:45:51 
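The failure above is the point of test case1: Malloc0 is already claimed exclusive_write by the NVMe-oF target on behalf of cnode1, so attaching the same bdev to a second subsystem is rejected and the JSON-RPC call returns code -32602 (invalid parameters). The script captures the non-zero status and treats it as the expected result. Against a live target the rejected step is simply (arguments from the log, $rpc as in the sketch above):

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails: bdev already claimed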
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:46:56.063 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:46:56.063 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:46:56.063 Adding namespace failed - expected result. 00:46:56.063 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:46:56.063 test case2: host connect to nvmf target in multiple paths 00:46:56.063 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:46:56.063 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:46:56.063 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:56.063 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:56.064 [2024-10-01 20:45:51.096746] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:46:56.064 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:56.064 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:46:56.064 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:46:56.064 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:46:56.064 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:46:56.064 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:46:56.064 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:46:56.064 20:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:46:58.032 20:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:46:58.032 20:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:46:58.032 20:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:46:58.310 20:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:46:58.310 20:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:46:58.310 20:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:46:58.310 20:45:53 
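test case2 connects the same subsystem through two listeners (ports 4420 and 4421), so the kernel initiator reaches one namespace over two paths; waitforserial then polls lsblk -l -o NAME,SERIAL until a device carrying the subsystem serial appears. The two connect calls from the trace, with the generated hostnqn/hostid kept in the variables the log itself uses:

    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421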
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:46:58.310 [global] 00:46:58.310 thread=1 00:46:58.310 invalidate=1 00:46:58.310 rw=write 00:46:58.310 time_based=1 00:46:58.310 runtime=1 00:46:58.310 ioengine=libaio 00:46:58.310 direct=1 00:46:58.310 bs=4096 00:46:58.310 iodepth=1 00:46:58.310 norandommap=0 00:46:58.310 numjobs=1 00:46:58.310 00:46:58.310 verify_dump=1 00:46:58.310 verify_backlog=512 00:46:58.310 verify_state_save=0 00:46:58.310 do_verify=1 00:46:58.310 verify=crc32c-intel 00:46:58.310 [job0] 00:46:58.310 filename=/dev/nvme0n1 00:46:58.310 Could not set queue depth (nvme0n1) 00:46:58.310 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:58.310 fio-3.35 00:46:58.310 Starting 1 thread 00:46:59.686 00:46:59.686 job0: (groupid=0, jobs=1): err= 0: pid=105028: Tue Oct 1 20:45:54 2024 00:46:59.686 read: IOPS=2734, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1001msec) 00:46:59.686 slat (nsec): min=12905, max=49656, avg=16344.26, stdev=4659.13 00:46:59.686 clat (usec): min=148, max=575, avg=182.07, stdev=29.76 00:46:59.686 lat (usec): min=162, max=591, avg=198.41, stdev=31.06 00:46:59.686 clat percentiles (usec): 00:46:59.686 | 1.00th=[ 153], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:46:59.686 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 178], 60.00th=[ 184], 00:46:59.686 | 70.00th=[ 190], 80.00th=[ 200], 90.00th=[ 212], 95.00th=[ 227], 00:46:59.686 | 99.00th=[ 285], 99.50th=[ 334], 99.90th=[ 529], 99.95th=[ 570], 00:46:59.686 | 99.99th=[ 578] 00:46:59.686 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:46:59.686 slat (nsec): min=18704, max=71021, avg=22661.12, stdev=5474.10 00:46:59.686 clat (usec): min=100, max=325, avg=122.69, stdev=18.21 00:46:59.686 lat (usec): min=119, max=349, avg=145.35, stdev=20.37 00:46:59.686 clat percentiles (usec): 00:46:59.686 | 1.00th=[ 103], 5.00th=[ 105], 10.00th=[ 106], 20.00th=[ 109], 00:46:59.686 | 30.00th=[ 111], 40.00th=[ 113], 50.00th=[ 118], 60.00th=[ 124], 00:46:59.686 | 70.00th=[ 129], 80.00th=[ 137], 90.00th=[ 147], 95.00th=[ 153], 00:46:59.686 | 99.00th=[ 172], 99.50th=[ 186], 99.90th=[ 281], 99.95th=[ 302], 00:46:59.686 | 99.99th=[ 326] 00:46:59.686 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:46:59.686 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:46:59.686 lat (usec) : 250=99.00%, 500=0.95%, 750=0.05% 00:46:59.686 cpu : usr=1.90%, sys=8.80%, ctx=5809, majf=0, minf=5 00:46:59.686 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:59.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:59.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:59.686 issued rwts: total=2737,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:59.686 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:59.686 00:46:59.686 Run status group 0 (all jobs): 00:46:59.686 READ: bw=10.7MiB/s (11.2MB/s), 10.7MiB/s-10.7MiB/s (11.2MB/s-11.2MB/s), io=10.7MiB (11.2MB), run=1001-1001msec 00:46:59.686 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:46:59.686 00:46:59.686 Disk stats (read/write): 00:46:59.686 nvme0n1: ios=2610/2578, merge=0/0, ticks=636/371, in_queue=1007, util=95.49% 00:46:59.686 20:45:54 
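The fio-wrapper flags appear to map directly onto the job file echoed above (-i block size, -d iodepth, -t rw mode, -r runtime, -v enables the crc32c-intel verify pass). An abridged standalone job file reconstructed from the [global]/[job0] dump in the log, written as a bash heredoc:

    cat > nmic.fio <<'EOF'
    [global]
    ioengine=libaio
    direct=1
    rw=write
    bs=4096
    iodepth=1
    time_based=1
    runtime=1
    numjobs=1
    verify=crc32c-intel
    do_verify=1
    verify_backlog=512
    [job0]
    filename=/dev/nvme0n1
    EOF
    # fio nmic.fio   # would reproduce the ~11 MB/s read / 12 MB/s write QD1 run above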
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:46:59.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:59.686 rmmod nvme_tcp 00:46:59.686 rmmod nvme_fabrics 00:46:59.686 rmmod nvme_keyring 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 104937 ']' 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 104937 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 104937 ']' 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 104937 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 104937 00:46:59.686 killing process with pid 104937 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 104937' 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 104937 00:46:59.686 20:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 104937 00:46:59.945 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:46:59.945 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:46:59.945 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:46:59.945 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:46:59.945 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:46:59.945 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:46:59.945 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:46:59.945 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:59.945 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:46:59.945 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:46:59.945 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:46:59.945 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:46:59.945 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:46:59.945 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:47:00.203 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:47:00.203 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:47:00.203 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:47:00.203 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:47:00.203 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:47:00.203 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:47:00.203 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:00.203 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:00.203 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:47:00.203 20:45:55 
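Teardown mirrors setup: every firewall rule was installed earlier through the ipts wrapper, which tags each rule with an SPDK_NVMF comment, so nvmftestfini can drop all of them at once by filtering the saved ruleset instead of tracking rules individually. The three nvmf/common.sh@789 commands traced above form that save-filter-restore pipeline:

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rules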
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:00.203 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:00.203 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:00.203 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:47:00.203 ************************************ 00:47:00.203 END TEST nvmf_nmic 00:47:00.203 ************************************ 00:47:00.203 00:47:00.203 real 0m5.489s 00:47:00.203 user 0m14.925s 00:47:00.203 sys 0m2.304s 00:47:00.203 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:00.203 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:47:00.203 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:47:00.203 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:47:00.203 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:47:00.203 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:47:00.203 ************************************ 00:47:00.203 START TEST nvmf_fio_target 00:47:00.203 ************************************ 00:47:00.203 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:47:00.463 * Looking for test storage... 
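Each suite runs under the run_test helper, which prints the START/END TEST banners and the real/user/sys summary seen above, then propagates the script's exit code. The next suite starts the same way and, in principle, can be invoked standalone with the arguments from the log (assuming the same repo checkout and environment):

    cd /home/vagrant/spdk_repo/spdk
    test/nvmf/target/fio.sh --transport=tcp --interrupt-mode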
00:47:00.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:00.463 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:47:00.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:00.464 --rc genhtml_branch_coverage=1 00:47:00.464 --rc genhtml_function_coverage=1 00:47:00.464 --rc genhtml_legend=1 00:47:00.464 --rc geninfo_all_blocks=1 00:47:00.464 --rc geninfo_unexecuted_blocks=1 00:47:00.464 00:47:00.464 ' 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:47:00.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:00.464 --rc genhtml_branch_coverage=1 00:47:00.464 --rc genhtml_function_coverage=1 00:47:00.464 --rc genhtml_legend=1 00:47:00.464 --rc geninfo_all_blocks=1 00:47:00.464 --rc geninfo_unexecuted_blocks=1 00:47:00.464 00:47:00.464 ' 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:47:00.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:00.464 --rc genhtml_branch_coverage=1 00:47:00.464 --rc genhtml_function_coverage=1 00:47:00.464 --rc genhtml_legend=1 00:47:00.464 --rc geninfo_all_blocks=1 00:47:00.464 --rc geninfo_unexecuted_blocks=1 00:47:00.464 00:47:00.464 ' 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:47:00.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:00.464 --rc genhtml_branch_coverage=1 00:47:00.464 --rc genhtml_function_coverage=1 00:47:00.464 --rc genhtml_legend=1 00:47:00.464 --rc geninfo_all_blocks=1 00:47:00.464 --rc geninfo_unexecuted_blocks=1 00:47:00.464 
00:47:00.464 ' 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:00.464 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@458 -- # nvmf_veth_init 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:47:00.465 20:45:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:47:00.465 Cannot find device "nvmf_init_br" 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:47:00.465 Cannot find device "nvmf_init_br2" 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:47:00.465 Cannot find device "nvmf_tgt_br" 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:47:00.465 Cannot find device "nvmf_tgt_br2" 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:47:00.465 Cannot find device "nvmf_init_br" 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:47:00.465 Cannot find device "nvmf_init_br2" 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:47:00.465 Cannot find device "nvmf_tgt_br" 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:47:00.465 Cannot find device "nvmf_tgt_br2" 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:47:00.465 Cannot find device "nvmf_br" 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:47:00.465 Cannot find device "nvmf_init_if" 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:47:00.465 Cannot find device "nvmf_init_if2" 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:00.465 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:47:00.465 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:00.724 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:00.724 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:47:00.724 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:47:00.724 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:00.724 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:47:00.724 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:47:00.725 20:45:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:00.725 20:45:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:47:00.725 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:47:00.725 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.123 ms 00:47:00.725 00:47:00.725 --- 10.0.0.3 ping statistics --- 00:47:00.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:00.725 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:47:00.725 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:47:00.725 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:47:00.725 00:47:00.725 --- 10.0.0.4 ping statistics --- 00:47:00.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:00.725 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:00.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:00.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:47:00.725 00:47:00.725 --- 10.0.0.1 ping statistics --- 00:47:00.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:00.725 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:47:00.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:47:00.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:47:00.725 00:47:00.725 --- 10.0.0.2 ping statistics --- 00:47:00.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:00.725 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@459 -- # return 0 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:47:00.725 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:47:00.984 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:47:00.985 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:47:00.985 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:00.985 20:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:47:00.985 20:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=105257 00:47:00.985 20:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:47:00.985 20:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 105257 00:47:00.985 20:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 105257 ']' 00:47:00.985 20:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:00.985 20:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:00.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:00.985 20:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:00.985 20:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:00.985 20:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:47:00.985 [2024-10-01 20:45:56.056324] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
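The nvmf_veth_init entries above reduce to a small veth-and-namespace topology: the target side lives inside the nvmf_tgt_ns_spdk namespace on 10.0.0.3/10.0.0.4, the initiator side stays on the host at 10.0.0.1/10.0.0.2, and a bridge (nvmf_br) joins the peer ends of the veth pairs. The earlier "Cannot find device" and "Cannot open network namespace" lines are the expected pre-clean of leftovers from a previous run, each cleanup command being allowed to fail. A condensed sketch of the setup follows, with every command, interface name, and address taken from the entries above; only one of the two interface pairs per side is shown (nvmf_init_if2/nvmf_tgt_if2 are created the same way), and the iptables wrapper's SPDK_NVMF comment tag is omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge joins both veth peers
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                          # host -> namespaced target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # namespaced target -> host

With all four pings passing, nvmfappstart launches build/bin/nvmf_tgt via ip netns exec nvmf_tgt_ns_spdk with -i 0 -e 0xFFFF --interrupt-mode -m 0xF, so the interrupt-mode notice above is the target honoring that flag rather than running its reactors in the default polled mode.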
00:47:00.985 [2024-10-01 20:45:56.057741] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:47:00.985 [2024-10-01 20:45:56.057809] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:00.985 [2024-10-01 20:45:56.195471] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:01.243 [2024-10-01 20:45:56.256721] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:01.243 [2024-10-01 20:45:56.256801] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:01.243 [2024-10-01 20:45:56.256811] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:01.243 [2024-10-01 20:45:56.256818] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:01.243 [2024-10-01 20:45:56.256824] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:01.243 [2024-10-01 20:45:56.257007] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:47:01.243 [2024-10-01 20:45:56.258330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:47:01.243 [2024-10-01 20:45:56.258323] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:47:01.243 [2024-10-01 20:45:56.258254] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:47:01.243 [2024-10-01 20:45:56.360434] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:47:01.243 [2024-10-01 20:45:56.361045] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:47:01.243 [2024-10-01 20:45:56.361160] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:47:01.243 [2024-10-01 20:45:56.361596] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:47:01.243 [2024-10-01 20:45:56.361921] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:47:01.243 20:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:01.243 20:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:47:01.243 20:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:47:01.243 20:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:01.243 20:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:47:01.243 20:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:01.243 20:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:47:01.502 [2024-10-01 20:45:56.671318] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:01.502 20:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:47:02.070 20:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:47:02.070 20:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:47:02.329 20:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:47:02.329 20:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:47:02.588 20:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:47:02.588 20:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:47:02.846 20:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:47:02.846 20:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:47:03.105 20:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:47:03.364 20:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:47:03.364 20:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:47:03.623 20:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:47:03.623 20:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:47:03.882 20:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:47:03.882 20:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:47:04.140 20:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:47:04.398 20:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:47:04.399 20:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:47:04.657 20:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:47:04.658 20:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:47:04.916 20:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:47:05.173 [2024-10-01 20:46:00.331325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:47:05.173 20:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:47:05.431 20:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:47:05.689 20:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:47:05.689 20:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:47:05.689 20:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:47:05.689 20:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:47:05.689 20:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:47:05.689 20:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:47:05.689 20:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:47:08.217 20:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:47:08.217 20:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:47:08.217 20:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:47:08.217 20:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:47:08.217 20:46:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:47:08.217 20:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:47:08.217 20:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:47:08.217 [global] 00:47:08.217 thread=1 00:47:08.217 invalidate=1 00:47:08.217 rw=write 00:47:08.217 time_based=1 00:47:08.217 runtime=1 00:47:08.217 ioengine=libaio 00:47:08.217 direct=1 00:47:08.217 bs=4096 00:47:08.217 iodepth=1 00:47:08.217 norandommap=0 00:47:08.217 numjobs=1 00:47:08.217 00:47:08.217 verify_dump=1 00:47:08.217 verify_backlog=512 00:47:08.217 verify_state_save=0 00:47:08.217 do_verify=1 00:47:08.217 verify=crc32c-intel 00:47:08.217 [job0] 00:47:08.217 filename=/dev/nvme0n1 00:47:08.217 [job1] 00:47:08.217 filename=/dev/nvme0n2 00:47:08.217 [job2] 00:47:08.217 filename=/dev/nvme0n3 00:47:08.217 [job3] 00:47:08.217 filename=/dev/nvme0n4 00:47:08.217 Could not set queue depth (nvme0n1) 00:47:08.217 Could not set queue depth (nvme0n2) 00:47:08.217 Could not set queue depth (nvme0n3) 00:47:08.217 Could not set queue depth (nvme0n4) 00:47:08.217 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:47:08.217 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:47:08.217 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:47:08.217 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:47:08.217 fio-3.35 00:47:08.217 Starting 4 threads 00:47:09.153 00:47:09.153 job0: (groupid=0, jobs=1): err= 0: pid=105535: Tue Oct 1 20:46:04 2024 00:47:09.153 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:47:09.153 slat (nsec): min=11002, max=52610, avg=16584.28, stdev=4355.02 00:47:09.153 clat (usec): min=237, max=2157, avg=332.81, stdev=71.53 00:47:09.153 lat (usec): min=259, max=2176, avg=349.39, stdev=72.00 00:47:09.153 clat percentiles (usec): 00:47:09.153 | 1.00th=[ 265], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 297], 00:47:09.153 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 326], 00:47:09.153 | 70.00th=[ 334], 80.00th=[ 347], 90.00th=[ 400], 95.00th=[ 461], 00:47:09.153 | 99.00th=[ 523], 99.50th=[ 570], 99.90th=[ 734], 99.95th=[ 2147], 00:47:09.153 | 99.99th=[ 2147] 00:47:09.153 write: IOPS=1631, BW=6525KiB/s (6682kB/s)(6532KiB/1001msec); 0 zone resets 00:47:09.153 slat (usec): min=18, max=608, avg=26.83, stdev=16.62 00:47:09.153 clat (usec): min=146, max=421, avg=253.13, stdev=30.04 00:47:09.153 lat (usec): min=225, max=880, avg=279.96, stdev=32.98 00:47:09.153 clat percentiles (usec): 00:47:09.153 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 231], 00:47:09.153 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:47:09.153 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 306], 00:47:09.153 | 99.00th=[ 371], 99.50th=[ 392], 99.90th=[ 420], 99.95th=[ 420], 00:47:09.153 | 99.99th=[ 420] 00:47:09.153 bw ( KiB/s): min= 8192, max= 8192, per=24.95%, avg=8192.00, stdev= 0.00, samples=1 00:47:09.153 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:47:09.153 lat (usec) : 250=26.66%, 500=72.29%, 750=1.01% 00:47:09.153 lat (msec) : 4=0.03% 
00:47:09.153 cpu : usr=1.50%, sys=5.10%, ctx=3171, majf=0, minf=9 00:47:09.153 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:09.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:09.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:09.153 issued rwts: total=1536,1633,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:09.153 latency : target=0, window=0, percentile=100.00%, depth=1 00:47:09.153 job1: (groupid=0, jobs=1): err= 0: pid=105536: Tue Oct 1 20:46:04 2024 00:47:09.153 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:47:09.153 slat (nsec): min=12098, max=45363, avg=15831.36, stdev=3571.77 00:47:09.153 clat (usec): min=157, max=347, avg=194.57, stdev=15.20 00:47:09.153 lat (usec): min=171, max=363, avg=210.40, stdev=15.74 00:47:09.153 clat percentiles (usec): 00:47:09.153 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:47:09.153 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:47:09.153 | 70.00th=[ 202], 80.00th=[ 206], 90.00th=[ 215], 95.00th=[ 221], 00:47:09.153 | 99.00th=[ 233], 99.50th=[ 239], 99.90th=[ 326], 99.95th=[ 334], 00:47:09.153 | 99.99th=[ 347] 00:47:09.153 write: IOPS=2650, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec); 0 zone resets 00:47:09.153 slat (usec): min=17, max=148, avg=25.81, stdev= 8.43 00:47:09.153 clat (usec): min=103, max=523, avg=144.52, stdev=26.16 00:47:09.153 lat (usec): min=130, max=548, avg=170.33, stdev=31.96 00:47:09.153 clat percentiles (usec): 00:47:09.153 | 1.00th=[ 117], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 127], 00:47:09.153 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 141], 00:47:09.153 | 70.00th=[ 147], 80.00th=[ 155], 90.00th=[ 182], 95.00th=[ 194], 00:47:09.153 | 99.00th=[ 217], 99.50th=[ 233], 99.90th=[ 441], 99.95th=[ 461], 00:47:09.153 | 99.99th=[ 523] 00:47:09.153 bw ( KiB/s): min=12288, max=12288, per=37.42%, avg=12288.00, stdev= 0.00, samples=1 00:47:09.153 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:47:09.153 lat (usec) : 250=99.75%, 500=0.23%, 750=0.02% 00:47:09.153 cpu : usr=2.00%, sys=8.20%, ctx=5214, majf=0, minf=10 00:47:09.153 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:09.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:09.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:09.153 issued rwts: total=2560,2653,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:09.153 latency : target=0, window=0, percentile=100.00%, depth=1 00:47:09.153 job2: (groupid=0, jobs=1): err= 0: pid=105537: Tue Oct 1 20:46:04 2024 00:47:09.153 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:47:09.153 slat (nsec): min=12787, max=61130, avg=17082.80, stdev=4831.86 00:47:09.153 clat (usec): min=194, max=404, avg=241.32, stdev=24.13 00:47:09.153 lat (usec): min=208, max=435, avg=258.40, stdev=24.99 00:47:09.153 clat percentiles (usec): 00:47:09.153 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 225], 00:47:09.153 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 243], 00:47:09.153 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 281], 00:47:09.153 | 99.00th=[ 338], 99.50th=[ 359], 99.90th=[ 396], 99.95th=[ 400], 00:47:09.153 | 99.99th=[ 404] 00:47:09.153 write: IOPS=2294, BW=9179KiB/s (9399kB/s)(9188KiB/1001msec); 0 zone resets 00:47:09.153 slat (usec): min=18, max=174, avg=25.13, stdev= 8.40 00:47:09.153 clat (usec): min=131, max=2167, 
avg=176.55, stdev=49.32 00:47:09.153 lat (usec): min=151, max=2191, avg=201.67, stdev=50.78 00:47:09.153 clat percentiles (usec): 00:47:09.153 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:47:09.153 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:47:09.153 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 206], 95.00th=[ 225], 00:47:09.153 | 99.00th=[ 262], 99.50th=[ 277], 99.90th=[ 529], 99.95th=[ 578], 00:47:09.153 | 99.99th=[ 2180] 00:47:09.153 bw ( KiB/s): min= 8544, max= 8544, per=26.02%, avg=8544.00, stdev= 0.00, samples=1 00:47:09.153 iops : min= 2136, max= 2136, avg=2136.00, stdev= 0.00, samples=1 00:47:09.153 lat (usec) : 250=86.42%, 500=13.51%, 750=0.05% 00:47:09.153 lat (msec) : 4=0.02% 00:47:09.153 cpu : usr=1.20%, sys=7.40%, ctx=4347, majf=0, minf=13 00:47:09.153 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:09.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:09.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:09.153 issued rwts: total=2048,2297,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:09.153 latency : target=0, window=0, percentile=100.00%, depth=1 00:47:09.153 job3: (groupid=0, jobs=1): err= 0: pid=105538: Tue Oct 1 20:46:04 2024 00:47:09.153 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:47:09.153 slat (nsec): min=12481, max=73907, avg=16808.88, stdev=4366.04 00:47:09.153 clat (usec): min=191, max=2083, avg=332.47, stdev=69.22 00:47:09.153 lat (usec): min=211, max=2100, avg=349.28, stdev=69.49 00:47:09.153 clat percentiles (usec): 00:47:09.153 | 1.00th=[ 269], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 297], 00:47:09.153 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 326], 00:47:09.153 | 70.00th=[ 334], 80.00th=[ 347], 90.00th=[ 392], 95.00th=[ 465], 00:47:09.153 | 99.00th=[ 515], 99.50th=[ 537], 99.90th=[ 685], 99.95th=[ 2089], 00:47:09.153 | 99.99th=[ 2089] 00:47:09.153 write: IOPS=1633, BW=6533KiB/s (6690kB/s)(6540KiB/1001msec); 0 zone resets 00:47:09.153 slat (nsec): min=17147, max=84598, avg=26769.98, stdev=8548.02 00:47:09.153 clat (usec): min=153, max=443, avg=252.92, stdev=30.72 00:47:09.153 lat (usec): min=175, max=473, avg=279.69, stdev=30.16 00:47:09.153 clat percentiles (usec): 00:47:09.153 | 1.00th=[ 200], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 231], 00:47:09.153 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:47:09.153 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 302], 00:47:09.153 | 99.00th=[ 379], 99.50th=[ 388], 99.90th=[ 420], 99.95th=[ 445], 00:47:09.153 | 99.99th=[ 445] 00:47:09.153 bw ( KiB/s): min= 8192, max= 8192, per=24.95%, avg=8192.00, stdev= 0.00, samples=1 00:47:09.153 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:47:09.153 lat (usec) : 250=26.58%, 500=72.37%, 750=1.01% 00:47:09.153 lat (msec) : 4=0.03% 00:47:09.154 cpu : usr=1.30%, sys=5.30%, ctx=3177, majf=0, minf=7 00:47:09.154 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:09.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:09.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:09.154 issued rwts: total=1536,1635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:09.154 latency : target=0, window=0, percentile=100.00%, depth=1 00:47:09.154 00:47:09.154 Run status group 0 (all jobs): 00:47:09.154 READ: bw=30.0MiB/s (31.4MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=30.0MiB (31.5MB), 
run=1001-1001msec 00:47:09.154 WRITE: bw=32.1MiB/s (33.6MB/s), 6525KiB/s-10.4MiB/s (6682kB/s-10.9MB/s), io=32.1MiB (33.7MB), run=1001-1001msec 00:47:09.154 00:47:09.154 Disk stats (read/write): 00:47:09.154 nvme0n1: ios=1217/1536, merge=0/0, ticks=420/398, in_queue=818, util=86.47% 00:47:09.154 nvme0n2: ios=2063/2372, merge=0/0, ticks=413/372, in_queue=785, util=86.76% 00:47:09.154 nvme0n3: ios=1633/2048, merge=0/0, ticks=399/381, in_queue=780, util=88.81% 00:47:09.154 nvme0n4: ios=1168/1536, merge=0/0, ticks=405/405, in_queue=810, util=89.65% 00:47:09.154 20:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:47:09.154 [global] 00:47:09.154 thread=1 00:47:09.154 invalidate=1 00:47:09.154 rw=randwrite 00:47:09.154 time_based=1 00:47:09.154 runtime=1 00:47:09.154 ioengine=libaio 00:47:09.154 direct=1 00:47:09.154 bs=4096 00:47:09.154 iodepth=1 00:47:09.154 norandommap=0 00:47:09.154 numjobs=1 00:47:09.154 00:47:09.154 verify_dump=1 00:47:09.154 verify_backlog=512 00:47:09.154 verify_state_save=0 00:47:09.154 do_verify=1 00:47:09.154 verify=crc32c-intel 00:47:09.154 [job0] 00:47:09.154 filename=/dev/nvme0n1 00:47:09.154 [job1] 00:47:09.154 filename=/dev/nvme0n2 00:47:09.154 [job2] 00:47:09.154 filename=/dev/nvme0n3 00:47:09.154 [job3] 00:47:09.154 filename=/dev/nvme0n4 00:47:09.412 Could not set queue depth (nvme0n1) 00:47:09.412 Could not set queue depth (nvme0n2) 00:47:09.412 Could not set queue depth (nvme0n3) 00:47:09.412 Could not set queue depth (nvme0n4) 00:47:09.412 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:47:09.412 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:47:09.412 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:47:09.412 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:47:09.412 fio-3.35 00:47:09.412 Starting 4 threads 00:47:10.790 00:47:10.790 job0: (groupid=0, jobs=1): err= 0: pid=105591: Tue Oct 1 20:46:05 2024 00:47:10.790 read: IOPS=2383, BW=9534KiB/s (9763kB/s)(9544KiB/1001msec) 00:47:10.790 slat (nsec): min=12196, max=71406, avg=17814.13, stdev=6078.14 00:47:10.790 clat (usec): min=162, max=524, avg=213.05, stdev=36.71 00:47:10.790 lat (usec): min=175, max=543, avg=230.87, stdev=38.68 00:47:10.790 clat percentiles (usec): 00:47:10.790 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 188], 00:47:10.790 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 212], 00:47:10.790 | 70.00th=[ 219], 80.00th=[ 227], 90.00th=[ 239], 95.00th=[ 273], 00:47:10.790 | 99.00th=[ 359], 99.50th=[ 433], 99.90th=[ 453], 99.95th=[ 482], 00:47:10.790 | 99.99th=[ 529] 00:47:10.790 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:47:10.790 slat (nsec): min=17983, max=80936, avg=25005.12, stdev=6899.06 00:47:10.790 clat (usec): min=114, max=249, avg=146.68, stdev=18.05 00:47:10.790 lat (usec): min=133, max=307, avg=171.69, stdev=19.79 00:47:10.790 clat percentiles (usec): 00:47:10.790 | 1.00th=[ 120], 5.00th=[ 123], 10.00th=[ 127], 20.00th=[ 131], 00:47:10.790 | 30.00th=[ 135], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 149], 00:47:10.790 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 172], 95.00th=[ 180], 00:47:10.790 | 99.00th=[ 202], 99.50th=[ 212], 99.90th=[ 243], 
99.95th=[ 249], 00:47:10.790 | 99.99th=[ 251] 00:47:10.790 bw ( KiB/s): min=12256, max=12256, per=42.79%, avg=12256.00, stdev= 0.00, samples=1 00:47:10.790 iops : min= 3064, max= 3064, avg=3064.00, stdev= 0.00, samples=1 00:47:10.790 lat (usec) : 250=96.64%, 500=3.34%, 750=0.02% 00:47:10.790 cpu : usr=2.00%, sys=8.00%, ctx=4956, majf=0, minf=17 00:47:10.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:10.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:10.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:10.790 issued rwts: total=2386,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:10.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:47:10.790 job1: (groupid=0, jobs=1): err= 0: pid=105592: Tue Oct 1 20:46:05 2024 00:47:10.790 read: IOPS=1323, BW=5295KiB/s (5422kB/s)(5300KiB/1001msec) 00:47:10.790 slat (usec): min=7, max=139, avg=18.40, stdev=10.81 00:47:10.790 clat (usec): min=233, max=3357, avg=364.29, stdev=97.46 00:47:10.790 lat (usec): min=246, max=3390, avg=382.68, stdev=98.32 00:47:10.790 clat percentiles (usec): 00:47:10.790 | 1.00th=[ 253], 5.00th=[ 281], 10.00th=[ 306], 20.00th=[ 322], 00:47:10.790 | 30.00th=[ 338], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 367], 00:47:10.790 | 70.00th=[ 379], 80.00th=[ 392], 90.00th=[ 424], 95.00th=[ 465], 00:47:10.790 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 619], 99.95th=[ 3359], 00:47:10.790 | 99.99th=[ 3359] 00:47:10.790 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:47:10.790 slat (nsec): min=13742, max=92373, avg=23678.85, stdev=10390.41 00:47:10.790 clat (usec): min=176, max=432, avg=293.71, stdev=39.39 00:47:10.790 lat (usec): min=218, max=476, avg=317.39, stdev=36.68 00:47:10.790 clat percentiles (usec): 00:47:10.790 | 1.00th=[ 200], 5.00th=[ 229], 10.00th=[ 243], 20.00th=[ 260], 00:47:10.790 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 306], 00:47:10.790 | 70.00th=[ 314], 80.00th=[ 326], 90.00th=[ 347], 95.00th=[ 359], 00:47:10.790 | 99.00th=[ 375], 99.50th=[ 388], 99.90th=[ 424], 99.95th=[ 433], 00:47:10.790 | 99.99th=[ 433] 00:47:10.790 bw ( KiB/s): min= 7608, max= 7608, per=26.56%, avg=7608.00, stdev= 0.00, samples=1 00:47:10.790 iops : min= 1902, max= 1902, avg=1902.00, stdev= 0.00, samples=1 00:47:10.790 lat (usec) : 250=8.11%, 500=90.98%, 750=0.87% 00:47:10.790 lat (msec) : 4=0.03% 00:47:10.790 cpu : usr=1.50%, sys=4.70%, ctx=2987, majf=0, minf=7 00:47:10.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:10.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:10.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:10.790 issued rwts: total=1325,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:10.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:47:10.790 job2: (groupid=0, jobs=1): err= 0: pid=105593: Tue Oct 1 20:46:05 2024 00:47:10.790 read: IOPS=1406, BW=5626KiB/s (5761kB/s)(5632KiB/1001msec) 00:47:10.790 slat (usec): min=8, max=152, avg=18.78, stdev= 8.87 00:47:10.790 clat (usec): min=203, max=3483, avg=361.12, stdev=99.03 00:47:10.790 lat (usec): min=223, max=3499, avg=379.90, stdev=98.93 00:47:10.790 clat percentiles (usec): 00:47:10.790 | 1.00th=[ 231], 5.00th=[ 289], 10.00th=[ 306], 20.00th=[ 322], 00:47:10.790 | 30.00th=[ 334], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 367], 00:47:10.790 | 70.00th=[ 375], 80.00th=[ 392], 90.00th=[ 416], 95.00th=[ 453], 
00:47:10.790 | 99.00th=[ 510], 99.50th=[ 537], 99.90th=[ 1090], 99.95th=[ 3490], 00:47:10.790 | 99.99th=[ 3490] 00:47:10.790 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:47:10.790 slat (usec): min=10, max=102, avg=25.30, stdev= 9.07 00:47:10.790 clat (usec): min=134, max=1072, avg=273.61, stdev=66.92 00:47:10.790 lat (usec): min=161, max=1096, avg=298.91, stdev=65.44 00:47:10.791 clat percentiles (usec): 00:47:10.791 | 1.00th=[ 141], 5.00th=[ 151], 10.00th=[ 161], 20.00th=[ 202], 00:47:10.791 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 297], 00:47:10.791 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 338], 95.00th=[ 355], 00:47:10.791 | 99.00th=[ 388], 99.50th=[ 400], 99.90th=[ 502], 99.95th=[ 1074], 00:47:10.791 | 99.99th=[ 1074] 00:47:10.791 bw ( KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1 00:47:10.791 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:47:10.791 lat (usec) : 250=12.57%, 500=86.62%, 750=0.71% 00:47:10.791 lat (msec) : 2=0.07%, 4=0.03% 00:47:10.791 cpu : usr=1.00%, sys=5.30%, ctx=3121, majf=0, minf=13 00:47:10.791 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:10.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:10.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:10.791 issued rwts: total=1408,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:10.791 latency : target=0, window=0, percentile=100.00%, depth=1 00:47:10.791 job3: (groupid=0, jobs=1): err= 0: pid=105594: Tue Oct 1 20:46:05 2024 00:47:10.791 read: IOPS=1324, BW=5299KiB/s (5426kB/s)(5304KiB/1001msec) 00:47:10.791 slat (usec): min=7, max=190, avg=17.84, stdev=11.32 00:47:10.791 clat (usec): min=174, max=3274, avg=364.61, stdev=95.39 00:47:10.791 lat (usec): min=216, max=3294, avg=382.46, stdev=95.40 00:47:10.791 clat percentiles (usec): 00:47:10.791 | 1.00th=[ 255], 5.00th=[ 285], 10.00th=[ 302], 20.00th=[ 322], 00:47:10.791 | 30.00th=[ 338], 40.00th=[ 351], 50.00th=[ 363], 60.00th=[ 371], 00:47:10.791 | 70.00th=[ 383], 80.00th=[ 396], 90.00th=[ 424], 95.00th=[ 457], 00:47:10.791 | 99.00th=[ 523], 99.50th=[ 537], 99.90th=[ 676], 99.95th=[ 3261], 00:47:10.791 | 99.99th=[ 3261] 00:47:10.791 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:47:10.791 slat (nsec): min=13806, max=91190, avg=26218.64, stdev=10167.54 00:47:10.791 clat (usec): min=156, max=431, avg=291.13, stdev=37.37 00:47:10.791 lat (usec): min=202, max=456, avg=317.35, stdev=35.81 00:47:10.791 clat percentiles (usec): 00:47:10.791 | 1.00th=[ 204], 5.00th=[ 229], 10.00th=[ 243], 20.00th=[ 260], 00:47:10.791 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302], 00:47:10.791 | 70.00th=[ 314], 80.00th=[ 322], 90.00th=[ 343], 95.00th=[ 351], 00:47:10.791 | 99.00th=[ 371], 99.50th=[ 379], 99.90th=[ 408], 99.95th=[ 433], 00:47:10.791 | 99.99th=[ 433] 00:47:10.791 bw ( KiB/s): min= 7608, max= 7608, per=26.56%, avg=7608.00, stdev= 0.00, samples=1 00:47:10.791 iops : min= 1902, max= 1902, avg=1902.00, stdev= 0.00, samples=1 00:47:10.791 lat (usec) : 250=7.83%, 500=91.51%, 750=0.63% 00:47:10.791 lat (msec) : 4=0.03% 00:47:10.791 cpu : usr=0.90%, sys=5.30%, ctx=3094, majf=0, minf=9 00:47:10.791 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:10.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:10.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:47:10.791 issued rwts: total=1326,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:10.791 latency : target=0, window=0, percentile=100.00%, depth=1 00:47:10.791 00:47:10.791 Run status group 0 (all jobs): 00:47:10.791 READ: bw=25.2MiB/s (26.4MB/s), 5295KiB/s-9534KiB/s (5422kB/s-9763kB/s), io=25.2MiB (26.4MB), run=1001-1001msec 00:47:10.791 WRITE: bw=28.0MiB/s (29.3MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:47:10.791 00:47:10.791 Disk stats (read/write): 00:47:10.791 nvme0n1: ios=2098/2355, merge=0/0, ticks=457/363, in_queue=820, util=88.58% 00:47:10.791 nvme0n2: ios=1072/1475, merge=0/0, ticks=425/439, in_queue=864, util=89.49% 00:47:10.791 nvme0n3: ios=1085/1536, merge=0/0, ticks=403/434, in_queue=837, util=89.52% 00:47:10.791 nvme0n4: ios=1024/1476, merge=0/0, ticks=381/438, in_queue=819, util=89.77% 00:47:10.791 20:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:47:10.791 [global] 00:47:10.791 thread=1 00:47:10.791 invalidate=1 00:47:10.791 rw=write 00:47:10.791 time_based=1 00:47:10.791 runtime=1 00:47:10.791 ioengine=libaio 00:47:10.791 direct=1 00:47:10.791 bs=4096 00:47:10.791 iodepth=128 00:47:10.791 norandommap=0 00:47:10.791 numjobs=1 00:47:10.791 00:47:10.791 verify_dump=1 00:47:10.791 verify_backlog=512 00:47:10.791 verify_state_save=0 00:47:10.791 do_verify=1 00:47:10.791 verify=crc32c-intel 00:47:10.791 [job0] 00:47:10.791 filename=/dev/nvme0n1 00:47:10.791 [job1] 00:47:10.791 filename=/dev/nvme0n2 00:47:10.791 [job2] 00:47:10.791 filename=/dev/nvme0n3 00:47:10.791 [job3] 00:47:10.791 filename=/dev/nvme0n4 00:47:10.791 Could not set queue depth (nvme0n1) 00:47:10.791 Could not set queue depth (nvme0n2) 00:47:10.791 Could not set queue depth (nvme0n3) 00:47:10.791 Could not set queue depth (nvme0n4) 00:47:10.791 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:47:10.791 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:47:10.791 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:47:10.791 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:47:10.791 fio-3.35 00:47:10.791 Starting 4 threads 00:47:12.203 00:47:12.203 job0: (groupid=0, jobs=1): err= 0: pid=105653: Tue Oct 1 20:46:07 2024 00:47:12.203 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:47:12.203 slat (usec): min=5, max=7854, avg=200.52, stdev=798.90 00:47:12.203 clat (usec): min=14928, max=35390, avg=25805.39, stdev=3528.08 00:47:12.203 lat (usec): min=14960, max=35416, avg=26005.91, stdev=3499.66 00:47:12.203 clat percentiles (usec): 00:47:12.203 | 1.00th=[17695], 5.00th=[20317], 10.00th=[21627], 20.00th=[22938], 00:47:12.203 | 30.00th=[23987], 40.00th=[24511], 50.00th=[25560], 60.00th=[26608], 00:47:12.203 | 70.00th=[27395], 80.00th=[28443], 90.00th=[30802], 95.00th=[32375], 00:47:12.203 | 99.00th=[33424], 99.50th=[33424], 99.90th=[35390], 99.95th=[35390], 00:47:12.203 | 99.99th=[35390] 00:47:12.203 write: IOPS=2714, BW=10.6MiB/s (11.1MB/s)(10.7MiB/1005msec); 0 zone resets 00:47:12.203 slat (usec): min=9, max=6188, avg=169.51, stdev=684.31 00:47:12.203 clat (usec): min=3990, max=39619, avg=22237.59, stdev=6587.57 00:47:12.203 lat (usec): min=7064, max=39668, avg=22407.11, stdev=6599.40 00:47:12.203 
clat percentiles (usec): 00:47:12.203 | 1.00th=[12125], 5.00th=[15795], 10.00th=[16450], 20.00th=[16909], 00:47:12.203 | 30.00th=[17957], 40.00th=[18482], 50.00th=[19530], 60.00th=[22152], 00:47:12.203 | 70.00th=[22938], 80.00th=[30278], 90.00th=[34341], 95.00th=[34866], 00:47:12.203 | 99.00th=[36963], 99.50th=[38536], 99.90th=[39584], 99.95th=[39584], 00:47:12.203 | 99.99th=[39584] 00:47:12.203 bw ( KiB/s): min= 9432, max=11376, per=16.72%, avg=10404.00, stdev=1374.62, samples=2 00:47:12.203 iops : min= 2358, max= 2844, avg=2601.00, stdev=343.65, samples=2 00:47:12.203 lat (msec) : 4=0.02%, 10=0.30%, 20=29.05%, 50=70.63% 00:47:12.203 cpu : usr=2.09%, sys=9.26%, ctx=319, majf=0, minf=11 00:47:12.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:47:12.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:12.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:47:12.203 issued rwts: total=2560,2728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:12.203 latency : target=0, window=0, percentile=100.00%, depth=128 00:47:12.203 job1: (groupid=0, jobs=1): err= 0: pid=105654: Tue Oct 1 20:46:07 2024 00:47:12.203 read: IOPS=2964, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1004msec) 00:47:12.203 slat (usec): min=5, max=6339, avg=178.87, stdev=731.13 00:47:12.203 clat (usec): min=3467, max=33850, avg=22717.37, stdev=5354.41 00:47:12.203 lat (usec): min=3493, max=33862, avg=22896.25, stdev=5361.43 00:47:12.203 clat percentiles (usec): 00:47:12.203 | 1.00th=[ 7635], 5.00th=[15795], 10.00th=[17171], 20.00th=[17433], 00:47:12.203 | 30.00th=[19006], 40.00th=[21365], 50.00th=[23462], 60.00th=[24511], 00:47:12.203 | 70.00th=[26346], 80.00th=[27657], 90.00th=[28967], 95.00th=[30802], 00:47:12.203 | 99.00th=[33424], 99.50th=[33424], 99.90th=[33817], 99.95th=[33817], 00:47:12.203 | 99.99th=[33817] 00:47:12.203 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:47:12.203 slat (usec): min=14, max=5216, avg=143.49, stdev=658.47 00:47:12.203 clat (usec): min=11823, max=26588, avg=19123.55, stdev=2711.63 00:47:12.203 lat (usec): min=14181, max=26606, avg=19267.04, stdev=2652.32 00:47:12.203 clat percentiles (usec): 00:47:12.203 | 1.00th=[14091], 5.00th=[15795], 10.00th=[16057], 20.00th=[16450], 00:47:12.203 | 30.00th=[16909], 40.00th=[17695], 50.00th=[19006], 60.00th=[19792], 00:47:12.203 | 70.00th=[20841], 80.00th=[21890], 90.00th=[22676], 95.00th=[23462], 00:47:12.203 | 99.00th=[26346], 99.50th=[26608], 99.90th=[26608], 99.95th=[26608], 00:47:12.203 | 99.99th=[26608] 00:47:12.203 bw ( KiB/s): min=12288, max=12312, per=19.76%, avg=12300.00, stdev=16.97, samples=2 00:47:12.203 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:47:12.203 lat (msec) : 4=0.35%, 10=0.53%, 20=48.45%, 50=50.68% 00:47:12.204 cpu : usr=3.29%, sys=9.27%, ctx=263, majf=0, minf=13 00:47:12.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:47:12.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:12.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:47:12.204 issued rwts: total=2976,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:12.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:47:12.204 job2: (groupid=0, jobs=1): err= 0: pid=105655: Tue Oct 1 20:46:07 2024 00:47:12.204 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:47:12.204 slat (usec): min=4, max=3269, avg=103.63, stdev=422.71 00:47:12.204 clat (usec): 
min=10244, max=16542, avg=13467.51, stdev=988.06 00:47:12.204 lat (usec): min=10893, max=16565, avg=13571.14, stdev=943.25 00:47:12.204 clat percentiles (usec): 00:47:12.204 | 1.00th=[11076], 5.00th=[11600], 10.00th=[11994], 20.00th=[12649], 00:47:12.204 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13698], 60.00th=[13829], 00:47:12.204 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14615], 95.00th=[14877], 00:47:12.204 | 99.00th=[15795], 99.50th=[15795], 99.90th=[16581], 99.95th=[16581], 00:47:12.204 | 99.99th=[16581] 00:47:12.204 write: IOPS=4881, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1003msec); 0 zone resets 00:47:12.204 slat (usec): min=9, max=3796, avg=99.75, stdev=396.11 00:47:12.204 clat (usec): min=2467, max=17082, avg=13191.37, stdev=1630.46 00:47:12.204 lat (usec): min=3097, max=17106, avg=13291.12, stdev=1626.08 00:47:12.204 clat percentiles (usec): 00:47:12.204 | 1.00th=[ 8029], 5.00th=[11076], 10.00th=[11207], 20.00th=[11731], 00:47:12.204 | 30.00th=[12256], 40.00th=[13042], 50.00th=[13435], 60.00th=[13829], 00:47:12.204 | 70.00th=[14091], 80.00th=[14484], 90.00th=[15139], 95.00th=[15401], 00:47:12.204 | 99.00th=[16319], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:47:12.204 | 99.99th=[17171] 00:47:12.204 bw ( KiB/s): min=18328, max=19855, per=30.67%, avg=19091.50, stdev=1079.75, samples=2 00:47:12.204 iops : min= 4582, max= 4963, avg=4772.50, stdev=269.41, samples=2 00:47:12.204 lat (msec) : 4=0.12%, 10=0.57%, 20=99.32% 00:47:12.204 cpu : usr=3.99%, sys=12.97%, ctx=659, majf=0, minf=13 00:47:12.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:47:12.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:12.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:47:12.204 issued rwts: total=4608,4896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:12.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:47:12.204 job3: (groupid=0, jobs=1): err= 0: pid=105656: Tue Oct 1 20:46:07 2024 00:47:12.204 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:47:12.204 slat (usec): min=7, max=3299, avg=103.14, stdev=399.48 00:47:12.204 clat (usec): min=10191, max=16733, avg=13543.78, stdev=784.98 00:47:12.204 lat (usec): min=10697, max=18803, avg=13646.92, stdev=717.05 00:47:12.204 clat percentiles (usec): 00:47:12.204 | 1.00th=[11076], 5.00th=[12125], 10.00th=[12649], 20.00th=[13042], 00:47:12.204 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13566], 60.00th=[13698], 00:47:12.204 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14353], 95.00th=[14746], 00:47:12.204 | 99.00th=[15664], 99.50th=[15664], 99.90th=[16581], 99.95th=[16712], 00:47:12.204 | 99.99th=[16712] 00:47:12.204 write: IOPS=4931, BW=19.3MiB/s (20.2MB/s)(19.3MiB/1002msec); 0 zone resets 00:47:12.204 slat (usec): min=11, max=3604, avg=98.46, stdev=413.21 00:47:12.204 clat (usec): min=306, max=16104, avg=12979.96, stdev=1670.51 00:47:12.204 lat (usec): min=2964, max=16126, avg=13078.41, stdev=1666.81 00:47:12.204 clat percentiles (usec): 00:47:12.204 | 1.00th=[ 7111], 5.00th=[11076], 10.00th=[11338], 20.00th=[11600], 00:47:12.204 | 30.00th=[11863], 40.00th=[12387], 50.00th=[13042], 60.00th=[13566], 00:47:12.204 | 70.00th=[14091], 80.00th=[14484], 90.00th=[14877], 95.00th=[15139], 00:47:12.204 | 99.00th=[15795], 99.50th=[15926], 99.90th=[16057], 99.95th=[16057], 00:47:12.204 | 99.99th=[16057] 00:47:12.204 bw ( KiB/s): min=18336, max=20208, per=30.97%, avg=19272.00, stdev=1323.70, samples=2 00:47:12.204 iops : min= 4584, max= 5052, 
avg=4818.00, stdev=330.93, samples=2 00:47:12.204 lat (usec) : 500=0.01% 00:47:12.204 lat (msec) : 4=0.38%, 10=0.40%, 20=99.21% 00:47:12.204 cpu : usr=4.30%, sys=14.59%, ctx=636, majf=0, minf=16 00:47:12.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:47:12.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:12.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:47:12.204 issued rwts: total=4608,4941,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:12.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:47:12.204 00:47:12.204 Run status group 0 (all jobs): 00:47:12.204 READ: bw=57.3MiB/s (60.1MB/s), 9.95MiB/s-18.0MiB/s (10.4MB/s-18.8MB/s), io=57.6MiB (60.4MB), run=1002-1005msec 00:47:12.204 WRITE: bw=60.8MiB/s (63.7MB/s), 10.6MiB/s-19.3MiB/s (11.1MB/s-20.2MB/s), io=61.1MiB (64.0MB), run=1002-1005msec 00:47:12.204 00:47:12.204 Disk stats (read/write): 00:47:12.204 nvme0n1: ios=2098/2560, merge=0/0, ticks=12827/12641, in_queue=25468, util=88.48% 00:47:12.204 nvme0n2: ios=2536/2560, merge=0/0, ticks=14508/10881, in_queue=25389, util=88.88% 00:47:12.204 nvme0n3: ios=4027/4096, merge=0/0, ticks=12933/11910, in_queue=24843, util=89.13% 00:47:12.204 nvme0n4: ios=4045/4096, merge=0/0, ticks=12831/11738, in_queue=24569, util=89.69% 00:47:12.204 20:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:47:12.204 [global] 00:47:12.204 thread=1 00:47:12.204 invalidate=1 00:47:12.204 rw=randwrite 00:47:12.204 time_based=1 00:47:12.204 runtime=1 00:47:12.204 ioengine=libaio 00:47:12.204 direct=1 00:47:12.204 bs=4096 00:47:12.204 iodepth=128 00:47:12.204 norandommap=0 00:47:12.204 numjobs=1 00:47:12.204 00:47:12.204 verify_dump=1 00:47:12.204 verify_backlog=512 00:47:12.204 verify_state_save=0 00:47:12.204 do_verify=1 00:47:12.204 verify=crc32c-intel 00:47:12.204 [job0] 00:47:12.204 filename=/dev/nvme0n1 00:47:12.204 [job1] 00:47:12.204 filename=/dev/nvme0n2 00:47:12.204 [job2] 00:47:12.204 filename=/dev/nvme0n3 00:47:12.204 [job3] 00:47:12.204 filename=/dev/nvme0n4 00:47:12.204 Could not set queue depth (nvme0n1) 00:47:12.204 Could not set queue depth (nvme0n2) 00:47:12.204 Could not set queue depth (nvme0n3) 00:47:12.204 Could not set queue depth (nvme0n4) 00:47:12.204 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:47:12.204 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:47:12.204 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:47:12.204 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:47:12.204 fio-3.35 00:47:12.204 Starting 4 threads 00:47:13.578 00:47:13.578 job0: (groupid=0, jobs=1): err= 0: pid=105711: Tue Oct 1 20:46:08 2024 00:47:13.578 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:47:13.578 slat (usec): min=2, max=10402, avg=118.12, stdev=667.59 00:47:13.578 clat (usec): min=4259, max=38962, avg=13721.55, stdev=5333.99 00:47:13.578 lat (usec): min=4272, max=38975, avg=13839.67, stdev=5384.08 00:47:13.578 clat percentiles (usec): 00:47:13.578 | 1.00th=[ 6128], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[10421], 00:47:13.578 | 30.00th=[10945], 40.00th=[11994], 50.00th=[12256], 60.00th=[12780], 00:47:13.578 
| 70.00th=[13304], 80.00th=[15270], 90.00th=[20579], 95.00th=[26608], 00:47:13.578 | 99.00th=[34866], 99.50th=[36439], 99.90th=[39060], 99.95th=[39060], 00:47:13.578 | 99.99th=[39060] 00:47:13.578 write: IOPS=3923, BW=15.3MiB/s (16.1MB/s)(15.5MiB/1010msec); 0 zone resets 00:47:13.578 slat (usec): min=4, max=8990, avg=138.75, stdev=560.52 00:47:13.578 clat (usec): min=3321, max=38902, avg=19911.83, stdev=8174.75 00:47:13.578 lat (usec): min=3343, max=38909, avg=20050.58, stdev=8234.27 00:47:13.578 clat percentiles (usec): 00:47:13.578 | 1.00th=[ 5145], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10814], 00:47:13.578 | 30.00th=[12387], 40.00th=[18220], 50.00th=[19530], 60.00th=[22676], 00:47:13.578 | 70.00th=[28181], 80.00th=[28967], 90.00th=[30016], 95.00th=[30802], 00:47:13.578 | 99.00th=[31589], 99.50th=[32900], 99.90th=[37487], 99.95th=[39060], 00:47:13.578 | 99.99th=[39060] 00:47:13.578 bw ( KiB/s): min=14304, max=16384, per=29.71%, avg=15344.00, stdev=1470.78, samples=2 00:47:13.578 iops : min= 3576, max= 4096, avg=3836.00, stdev=367.70, samples=2 00:47:13.578 lat (msec) : 4=0.11%, 10=10.02%, 20=60.75%, 50=29.12% 00:47:13.578 cpu : usr=3.27%, sys=8.52%, ctx=496, majf=0, minf=5 00:47:13.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:47:13.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:13.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:47:13.579 issued rwts: total=3584,3963,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:13.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:47:13.579 job1: (groupid=0, jobs=1): err= 0: pid=105712: Tue Oct 1 20:46:08 2024 00:47:13.579 read: IOPS=2704, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1003msec) 00:47:13.579 slat (usec): min=6, max=12115, avg=163.51, stdev=857.68 00:47:13.579 clat (usec): min=2278, max=46055, avg=19341.67, stdev=9182.69 00:47:13.579 lat (usec): min=2304, max=46076, avg=19505.18, stdev=9223.42 00:47:13.579 clat percentiles (usec): 00:47:13.579 | 1.00th=[ 5866], 5.00th=[13173], 10.00th=[14091], 20.00th=[14615], 00:47:13.579 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15139], 60.00th=[15533], 00:47:13.579 | 70.00th=[16712], 80.00th=[24511], 90.00th=[36439], 95.00th=[42206], 00:47:13.579 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45876], 99.95th=[45876], 00:47:13.579 | 99.99th=[45876] 00:47:13.579 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:47:13.579 slat (usec): min=11, max=20978, avg=173.16, stdev=933.10 00:47:13.579 clat (usec): min=9651, max=55867, avg=23744.29, stdev=13279.07 00:47:13.579 lat (usec): min=10886, max=55891, avg=23917.46, stdev=13335.57 00:47:13.579 clat percentiles (usec): 00:47:13.579 | 1.00th=[11731], 5.00th=[12911], 10.00th=[13829], 20.00th=[14222], 00:47:13.579 | 30.00th=[14615], 40.00th=[14877], 50.00th=[18220], 60.00th=[19268], 00:47:13.579 | 70.00th=[20841], 80.00th=[38536], 90.00th=[48497], 95.00th=[51643], 00:47:13.579 | 99.00th=[55837], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], 00:47:13.579 | 99.99th=[55837] 00:47:13.579 bw ( KiB/s): min=12288, max=12288, per=23.79%, avg=12288.00, stdev= 0.00, samples=2 00:47:13.579 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:47:13.579 lat (msec) : 4=0.16%, 10=1.02%, 20=70.75%, 50=23.13%, 100=4.94% 00:47:13.579 cpu : usr=2.89%, sys=8.68%, ctx=330, majf=0, minf=19 00:47:13.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:47:13.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:47:13.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:47:13.579 issued rwts: total=2713,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:13.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:47:13.579 job2: (groupid=0, jobs=1): err= 0: pid=105713: Tue Oct 1 20:46:08 2024 00:47:13.579 read: IOPS=2519, BW=9.84MiB/s (10.3MB/s)(10.0MiB/1016msec) 00:47:13.579 slat (usec): min=2, max=11254, avg=180.09, stdev=881.49 00:47:13.579 clat (usec): min=13510, max=32256, avg=21884.26, stdev=3090.06 00:47:13.579 lat (usec): min=13528, max=32919, avg=22064.35, stdev=3126.71 00:47:13.579 clat percentiles (usec): 00:47:13.579 | 1.00th=[15270], 5.00th=[16909], 10.00th=[17695], 20.00th=[19530], 00:47:13.579 | 30.00th=[20317], 40.00th=[21365], 50.00th=[21890], 60.00th=[22414], 00:47:13.579 | 70.00th=[22676], 80.00th=[24249], 90.00th=[25560], 95.00th=[27657], 00:47:13.579 | 99.00th=[30540], 99.50th=[31065], 99.90th=[32113], 99.95th=[32375], 00:47:13.579 | 99.99th=[32375] 00:47:13.579 write: IOPS=3004, BW=11.7MiB/s (12.3MB/s)(11.9MiB/1016msec); 0 zone resets 00:47:13.579 slat (usec): min=4, max=9578, avg=169.23, stdev=537.93 00:47:13.579 clat (usec): min=12520, max=38190, avg=23640.50, stdev=2995.04 00:47:13.579 lat (usec): min=12552, max=38892, avg=23809.73, stdev=3039.83 00:47:13.579 clat percentiles (usec): 00:47:13.579 | 1.00th=[16581], 5.00th=[18220], 10.00th=[19792], 20.00th=[21890], 00:47:13.579 | 30.00th=[22938], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:47:13.579 | 70.00th=[24511], 80.00th=[25035], 90.00th=[26346], 95.00th=[29492], 00:47:13.579 | 99.00th=[32900], 99.50th=[33817], 99.90th=[35914], 99.95th=[37487], 00:47:13.579 | 99.99th=[38011] 00:47:13.579 bw ( KiB/s): min=11120, max=12263, per=22.63%, avg=11691.50, stdev=808.22, samples=2 00:47:13.579 iops : min= 2780, max= 3065, avg=2922.50, stdev=201.53, samples=2 00:47:13.579 lat (msec) : 20=17.99%, 50=82.01% 00:47:13.579 cpu : usr=2.56%, sys=7.88%, ctx=1207, majf=0, minf=14 00:47:13.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:47:13.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:13.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:47:13.579 issued rwts: total=2560,3053,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:13.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:47:13.579 job3: (groupid=0, jobs=1): err= 0: pid=105714: Tue Oct 1 20:46:08 2024 00:47:13.579 read: IOPS=2519, BW=9.84MiB/s (10.3MB/s)(10.0MiB/1016msec) 00:47:13.579 slat (usec): min=3, max=8916, avg=177.13, stdev=824.98 00:47:13.579 clat (usec): min=13843, max=31181, avg=21958.74, stdev=3105.28 00:47:13.579 lat (usec): min=13858, max=31196, avg=22135.87, stdev=3107.89 00:47:13.579 clat percentiles (usec): 00:47:13.579 | 1.00th=[15270], 5.00th=[16581], 10.00th=[17695], 20.00th=[19530], 00:47:13.579 | 30.00th=[20317], 40.00th=[21365], 50.00th=[21627], 60.00th=[22676], 00:47:13.579 | 70.00th=[23725], 80.00th=[24511], 90.00th=[25822], 95.00th=[27395], 00:47:13.579 | 99.00th=[29492], 99.50th=[30540], 99.90th=[31065], 99.95th=[31065], 00:47:13.579 | 99.99th=[31065] 00:47:13.579 write: IOPS=2984, BW=11.7MiB/s (12.2MB/s)(11.8MiB/1016msec); 0 zone resets 00:47:13.579 slat (usec): min=3, max=9437, avg=173.12, stdev=613.41 00:47:13.579 clat (usec): min=14734, max=38279, avg=23724.50, stdev=2944.38 00:47:13.579 lat (usec): min=15308, max=38287, avg=23897.62, stdev=2981.96 00:47:13.579 clat percentiles 
(usec): 00:47:13.579 | 1.00th=[16450], 5.00th=[19006], 10.00th=[20317], 20.00th=[22414], 00:47:13.579 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:47:13.579 | 70.00th=[24511], 80.00th=[25035], 90.00th=[26084], 95.00th=[28443], 00:47:13.579 | 99.00th=[33162], 99.50th=[36963], 99.90th=[38011], 99.95th=[38011], 00:47:13.579 | 99.99th=[38536] 00:47:13.579 bw ( KiB/s): min=10952, max=12288, per=22.50%, avg=11620.00, stdev=944.69, samples=2 00:47:13.579 iops : min= 2738, max= 3072, avg=2905.00, stdev=236.17, samples=2 00:47:13.579 lat (msec) : 20=17.92%, 50=82.08% 00:47:13.579 cpu : usr=2.36%, sys=7.88%, ctx=1158, majf=0, minf=13 00:47:13.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:47:13.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:13.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:47:13.579 issued rwts: total=2560,3032,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:13.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:47:13.579 00:47:13.579 Run status group 0 (all jobs): 00:47:13.579 READ: bw=43.9MiB/s (46.0MB/s), 9.84MiB/s-13.9MiB/s (10.3MB/s-14.5MB/s), io=44.6MiB (46.8MB), run=1003-1016msec 00:47:13.579 WRITE: bw=50.4MiB/s (52.9MB/s), 11.7MiB/s-15.3MiB/s (12.2MB/s-16.1MB/s), io=51.2MiB (53.7MB), run=1003-1016msec 00:47:13.579 00:47:13.579 Disk stats (read/write): 00:47:13.579 nvme0n1: ios=3121/3255, merge=0/0, ticks=40674/62522, in_queue=103196, util=88.37% 00:47:13.579 nvme0n2: ios=2475/2560, merge=0/0, ticks=12169/13305, in_queue=25474, util=89.06% 00:47:13.579 nvme0n3: ios=2170/2560, merge=0/0, ticks=23646/28027, in_queue=51673, util=89.03% 00:47:13.579 nvme0n4: ios=2177/2560, merge=0/0, ticks=23507/28229, in_queue=51736, util=89.58% 00:47:13.579 20:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:47:13.579 20:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=105727 00:47:13.579 20:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:47:13.579 20:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:47:13.579 [global] 00:47:13.579 thread=1 00:47:13.579 invalidate=1 00:47:13.579 rw=read 00:47:13.579 time_based=1 00:47:13.579 runtime=10 00:47:13.579 ioengine=libaio 00:47:13.579 direct=1 00:47:13.579 bs=4096 00:47:13.579 iodepth=1 00:47:13.579 norandommap=1 00:47:13.579 numjobs=1 00:47:13.579 00:47:13.579 [job0] 00:47:13.579 filename=/dev/nvme0n1 00:47:13.579 [job1] 00:47:13.579 filename=/dev/nvme0n2 00:47:13.579 [job2] 00:47:13.579 filename=/dev/nvme0n3 00:47:13.579 [job3] 00:47:13.579 filename=/dev/nvme0n4 00:47:13.579 Could not set queue depth (nvme0n1) 00:47:13.579 Could not set queue depth (nvme0n2) 00:47:13.579 Could not set queue depth (nvme0n3) 00:47:13.579 Could not set queue depth (nvme0n4) 00:47:13.579 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:47:13.579 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:47:13.579 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:47:13.579 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:47:13.579 fio-3.35 00:47:13.579 
Starting 4 threads 00:47:16.861 20:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:47:16.861 fio: pid=105777, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:47:16.861 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=46858240, buflen=4096 00:47:16.861 20:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:47:16.861 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=40366080, buflen=4096 00:47:16.861 fio: pid=105776, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:47:17.119 20:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:47:17.119 20:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:47:17.378 fio: pid=105773, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:47:17.378 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=60948480, buflen=4096 00:47:17.378 20:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:47:17.378 20:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:47:17.637 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=49889280, buflen=4096 00:47:17.637 fio: pid=105774, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:47:17.637 20:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:47:17.637 20:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:47:17.637 00:47:17.637 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105773: Tue Oct 1 20:46:12 2024 00:47:17.637 read: IOPS=4234, BW=16.5MiB/s (17.3MB/s)(58.1MiB/3514msec) 00:47:17.637 slat (usec): min=7, max=12653, avg=18.59, stdev=180.80 00:47:17.637 clat (usec): min=73, max=6840, avg=216.09, stdev=77.94 00:47:17.637 lat (usec): min=174, max=13023, avg=234.68, stdev=198.28 00:47:17.637 clat percentiles (usec): 00:47:17.637 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:47:17.637 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:47:17.637 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 251], 95.00th=[ 302], 00:47:17.637 | 99.00th=[ 363], 99.50th=[ 379], 99.90th=[ 502], 99.95th=[ 898], 00:47:17.637 | 99.99th=[ 3261] 00:47:17.637 bw ( KiB/s): min=15848, max=18352, per=35.18%, avg=17708.00, stdev=955.52, samples=6 00:47:17.637 iops : min= 3962, max= 4588, avg=4427.00, stdev=238.88, samples=6 00:47:17.637 lat (usec) : 100=0.01%, 250=90.01%, 500=9.87%, 750=0.05%, 1000=0.01% 00:47:17.637 lat (msec) : 2=0.02%, 4=0.02%, 10=0.01% 00:47:17.637 cpu : usr=1.02%, sys=5.49%, ctx=14947, majf=0, minf=1 00:47:17.637 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:47:17.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:17.637 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:17.637 issued rwts: total=14881,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:17.637 latency : target=0, window=0, percentile=100.00%, depth=1 00:47:17.637 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105774: Tue Oct 1 20:46:12 2024 00:47:17.637 read: IOPS=3169, BW=12.4MiB/s (13.0MB/s)(47.6MiB/3843msec) 00:47:17.637 slat (usec): min=7, max=10345, avg=20.43, stdev=170.63 00:47:17.637 clat (usec): min=57, max=6753, avg=293.26, stdev=90.93 00:47:17.637 lat (usec): min=169, max=10695, avg=313.69, stdev=193.89 00:47:17.637 clat percentiles (usec): 00:47:17.637 | 1.00th=[ 172], 5.00th=[ 190], 10.00th=[ 219], 20.00th=[ 269], 00:47:17.637 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 306], 00:47:17.637 | 70.00th=[ 314], 80.00th=[ 322], 90.00th=[ 334], 95.00th=[ 347], 00:47:17.637 | 99.00th=[ 420], 99.50th=[ 449], 99.90th=[ 971], 99.95th=[ 1844], 00:47:17.637 | 99.99th=[ 3097] 00:47:17.637 bw ( KiB/s): min=11752, max=13011, per=24.58%, avg=12371.86, stdev=383.08, samples=7 00:47:17.637 iops : min= 2938, max= 3252, avg=3092.86, stdev=95.56, samples=7 00:47:17.637 lat (usec) : 100=0.04%, 250=15.43%, 500=84.30%, 750=0.11%, 1000=0.02% 00:47:17.637 lat (msec) : 2=0.06%, 4=0.02%, 10=0.01% 00:47:17.637 cpu : usr=0.96%, sys=4.45%, ctx=12263, majf=0, minf=1 00:47:17.637 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:17.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:17.637 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:17.637 issued rwts: total=12181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:17.637 latency : target=0, window=0, percentile=100.00%, depth=1 00:47:17.637 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105776: Tue Oct 1 20:46:12 2024 00:47:17.637 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(38.5MiB/3237msec) 00:47:17.637 slat (usec): min=4, max=18839, avg=18.54, stdev=204.89 00:47:17.637 clat (usec): min=197, max=1780, avg=308.28, stdev=39.08 00:47:17.637 lat (usec): min=210, max=19110, avg=326.82, stdev=207.96 00:47:17.637 clat percentiles (usec): 00:47:17.637 | 1.00th=[ 251], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 289], 00:47:17.637 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 310], 00:47:17.637 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 338], 95.00th=[ 351], 00:47:17.637 | 99.00th=[ 408], 99.50th=[ 429], 99.90th=[ 494], 99.95th=[ 1319], 00:47:17.637 | 99.99th=[ 1778] 00:47:17.637 bw ( KiB/s): min=12168, max=12488, per=24.52%, avg=12341.33, stdev=151.34, samples=6 00:47:17.637 iops : min= 3042, max= 3122, avg=3085.33, stdev=37.83, samples=6 00:47:17.637 lat (usec) : 250=0.89%, 500=99.02%, 750=0.03% 00:47:17.637 lat (msec) : 2=0.05% 00:47:17.637 cpu : usr=0.96%, sys=3.96%, ctx=10078, majf=0, minf=2 00:47:17.637 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:17.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:17.637 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:17.637 issued rwts: total=9856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:17.637 latency : target=0, window=0, percentile=100.00%, depth=1 00:47:17.637 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u 
error, error=Operation not supported): pid=105777: Tue Oct 1 20:46:12 2024 00:47:17.637 read: IOPS=3862, BW=15.1MiB/s (15.8MB/s)(44.7MiB/2962msec) 00:47:17.637 slat (usec): min=11, max=120, avg=15.78, stdev= 4.86 00:47:17.637 clat (usec): min=190, max=2226, avg=241.45, stdev=39.11 00:47:17.637 lat (usec): min=204, max=2252, avg=257.23, stdev=39.88 00:47:17.637 clat percentiles (usec): 00:47:17.637 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 225], 00:47:17.637 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 245], 00:47:17.637 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 277], 00:47:17.637 | 99.00th=[ 310], 99.50th=[ 334], 99.90th=[ 437], 99.95th=[ 660], 00:47:17.637 | 99.99th=[ 2180] 00:47:17.637 bw ( KiB/s): min=14848, max=15776, per=30.68%, avg=15440.00, stdev=385.04, samples=5 00:47:17.637 iops : min= 3712, max= 3944, avg=3860.00, stdev=96.26, samples=5 00:47:17.637 lat (usec) : 250=71.17%, 500=28.73%, 750=0.06% 00:47:17.637 lat (msec) : 2=0.02%, 4=0.02% 00:47:17.637 cpu : usr=1.08%, sys=5.23%, ctx=11441, majf=0, minf=2 00:47:17.637 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:17.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:17.637 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:17.637 issued rwts: total=11441,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:17.637 latency : target=0, window=0, percentile=100.00%, depth=1 00:47:17.637 00:47:17.637 Run status group 0 (all jobs): 00:47:17.637 READ: bw=49.2MiB/s (51.5MB/s), 11.9MiB/s-16.5MiB/s (12.5MB/s-17.3MB/s), io=189MiB (198MB), run=2962-3843msec 00:47:17.637 00:47:17.637 Disk stats (read/write): 00:47:17.637 nvme0n1: ios=14352/0, merge=0/0, ticks=3131/0, in_queue=3131, util=94.88% 00:47:17.637 nvme0n2: ios=11167/0, merge=0/0, ticks=3399/0, in_queue=3399, util=95.45% 00:47:17.637 nvme0n3: ios=9482/0, merge=0/0, ticks=2957/0, in_queue=2957, util=95.89% 00:47:17.637 nvme0n4: ios=11053/0, merge=0/0, ticks=2651/0, in_queue=2651, util=96.69% 00:47:17.896 20:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:47:17.896 20:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:47:18.154 20:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:47:18.154 20:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:47:18.412 20:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:47:18.412 20:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:47:18.980 20:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:47:18.980 20:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:47:18.980 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@69 -- # fio_status=0 00:47:18.980 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 105727 00:47:18.980 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:47:18.980 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:47:19.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:47:19.238 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:47:19.238 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:47:19.238 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:47:19.238 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:47:19.238 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:47:19.238 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:47:19.238 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:47:19.238 nvmf hotplug test: fio failed as expected 00:47:19.238 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:47:19.238 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:47:19.238 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:19.497 rmmod nvme_tcp 00:47:19.497 rmmod nvme_fabrics 00:47:19.497 rmmod nvme_keyring 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 105257 ']' 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 105257 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 105257 ']' 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 105257 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105257 00:47:19.497 killing process with pid 105257 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105257' 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 105257 00:47:19.497 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 105257 00:47:19.756 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:47:19.756 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:47:19.756 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:47:19.756 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:47:19.756 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:47:19.756 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:47:19.756 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:47:19.756 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:19.756 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:47:19.756 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:47:19.756 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:47:19.756 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:47:19.756 20:46:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:47:19.756 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:47:19.756 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:47:19.756 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:47:19.756 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:47:19.756 20:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:47:19.756 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:47:20.015 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:47:20.015 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:20.015 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:20.015 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:47:20.015 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:20.015 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:20.015 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:20.015 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:47:20.015 00:47:20.015 real 0m19.706s 00:47:20.015 user 0m58.738s 00:47:20.015 sys 0m12.344s 00:47:20.015 ************************************ 00:47:20.015 END TEST nvmf_fio_target 00:47:20.015 ************************************ 00:47:20.015 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:20.015 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:47:20.015 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:47:20.015 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:47:20.015 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:47:20.015 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:47:20.016 ************************************ 00:47:20.016 START TEST nvmf_bdevio 00:47:20.016 ************************************ 00:47:20.016 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:47:20.016 * Looking for test storage... 
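The hotplug sequence that finished above follows a simple pattern: start a long-running background fio read job, delete the backing bdevs out from under the exported namespaces over RPC, and then require that fio exits nonzero ("Operation not supported" on each /dev/nvme0nX). A condensed sketch of that flow; the bdev names, wrapper path, and sleep are taken from the log, while the surrounding control flow is illustrative rather than fio.sh verbatim:

  # hedged reconstruction of the hotplug test, not the actual fio.sh source
  /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3                                      # let the fio jobs ramp up
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_raid_delete concat0                # composite bdevs first
  $rpc bdev_raid_delete raid0
  for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      $rpc bdev_malloc_delete "$m"             # then the plain malloc bdevs
  done
  fio_status=0
  wait "$fio_pid" || fio_status=$?             # fio is expected to fail here
  [ "$fio_status" -ne 0 ] && echo 'nvmf hotplug test: fio failed as expected'

In the run above, wait 105727 returned status 4 from the io_u errors, so the test reports success precisely because fio failed.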
00:47:20.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:47:20.016 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:47:20.016 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:47:20.016 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:47:20.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:20.276 --rc genhtml_branch_coverage=1 00:47:20.276 --rc genhtml_function_coverage=1 00:47:20.276 --rc genhtml_legend=1 00:47:20.276 --rc geninfo_all_blocks=1 00:47:20.276 --rc geninfo_unexecuted_blocks=1 00:47:20.276 00:47:20.276 ' 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:47:20.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:20.276 --rc genhtml_branch_coverage=1 00:47:20.276 --rc genhtml_function_coverage=1 00:47:20.276 --rc genhtml_legend=1 00:47:20.276 --rc geninfo_all_blocks=1 00:47:20.276 --rc geninfo_unexecuted_blocks=1 00:47:20.276 00:47:20.276 ' 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:47:20.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:20.276 --rc genhtml_branch_coverage=1 00:47:20.276 --rc genhtml_function_coverage=1 00:47:20.276 --rc genhtml_legend=1 00:47:20.276 --rc geninfo_all_blocks=1 00:47:20.276 --rc geninfo_unexecuted_blocks=1 00:47:20.276 00:47:20.276 ' 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:47:20.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:20.276 --rc genhtml_branch_coverage=1 00:47:20.276 --rc genhtml_function_coverage=1 00:47:20.276 --rc genhtml_legend=1 00:47:20.276 --rc geninfo_all_blocks=1 00:47:20.276 --rc geninfo_unexecuted_blocks=1 00:47:20.276 00:47:20.276 ' 00:47:20.276 20:46:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:20.276 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:20.277 20:46:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@458 -- # nvmf_veth_init 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:47:20.277 Cannot find device "nvmf_init_br" 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:47:20.277 Cannot find device "nvmf_init_br2" 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:47:20.277 Cannot find device "nvmf_tgt_br" 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:47:20.277 Cannot find device "nvmf_tgt_br2" 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:47:20.277 Cannot find device "nvmf_init_br" 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:47:20.277 Cannot find device "nvmf_init_br2" 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:47:20.277 Cannot find device "nvmf_tgt_br" 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:47:20.277 Cannot find device "nvmf_tgt_br2" 00:47:20.277 20:46:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:47:20.277 Cannot find device "nvmf_br" 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:47:20.277 Cannot find device "nvmf_init_if" 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:47:20.277 Cannot find device "nvmf_init_if2" 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:20.277 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:20.277 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:20.277 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:47:20.537 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:20.537 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:20.537 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:20.537 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:20.537 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:20.537 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:47:20.537 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:47:20.537 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:47:20.537 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:47:20.537 20:46:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:47:20.537 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:47:20.537 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:47:20.537 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:47:20.537 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:47:20.537 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:20.537 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:20.537 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:20.537 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:47:20.537 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:47:20.538 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:47:20.538 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:47:20.538 00:47:20.538 --- 10.0.0.3 ping statistics --- 00:47:20.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:20.538 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:47:20.538 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:47:20.538 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:47:20.538 00:47:20.538 --- 10.0.0.4 ping statistics --- 00:47:20.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:20.538 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:20.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:20.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:47:20.538 00:47:20.538 --- 10.0.0.1 ping statistics --- 00:47:20.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:20.538 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:47:20.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:20.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:47:20.538 00:47:20.538 --- 10.0.0.2 ping statistics --- 00:47:20.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:20.538 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@459 -- # return 0 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:47:20.538 20:46:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=106151 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 106151 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 106151 ']' 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:20.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:20.538 20:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:47:20.798 [2024-10-01 20:46:15.797989] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:47:20.798 [2024-10-01 20:46:15.799155] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:47:20.798 [2024-10-01 20:46:15.799263] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:20.798 [2024-10-01 20:46:15.935646] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:20.798 [2024-10-01 20:46:15.998085] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:20.798 [2024-10-01 20:46:15.998195] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:20.798 [2024-10-01 20:46:15.998208] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:20.798 [2024-10-01 20:46:15.998217] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:20.798 [2024-10-01 20:46:15.998224] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:20.798 [2024-10-01 20:46:15.998878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:47:20.798 [2024-10-01 20:46:15.999016] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:47:20.798 [2024-10-01 20:46:15.999166] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:47:20.798 [2024-10-01 20:46:15.999166] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:47:21.058 [2024-10-01 20:46:16.097245] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:47:21.058 [2024-10-01 20:46:16.097363] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:47:21.058 [2024-10-01 20:46:16.097976] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
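
The startup just traced is nvmfappstart in a nutshell: nvmf_tgt is launched inside the target namespace with --interrupt-mode, so each reactor parks on an event fd instead of busy-polling, and waitforlisten then blocks until the RPC socket answers. A minimal sketch of that readiness loop, assuming an SPDK checkout with scripts/rpc.py available:

# start the target in the namespace (mirrors the trace above)
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
nvmfpid=$!
# poll the UNIX-domain RPC socket until the app is up (what waitforlisten does, in spirit)
until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done

Once the loop exits, rpc_cmd calls such as the nvmf_create_transport seen next can be issued safely.
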
00:47:21.058 [2024-10-01 20:46:16.098031] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:47:21.058 [2024-10-01 20:46:16.098224] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:47:21.058 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:21.058 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:47:21.058 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:47:21.058 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:21.058 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:47:21.058 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:21.058 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:47:21.059 [2024-10-01 20:46:16.176390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:47:21.059 Malloc0 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:47:21.059 [2024-10-01 20:46:16.236386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:47:21.059 { 00:47:21.059 "params": { 00:47:21.059 "name": "Nvme$subsystem", 00:47:21.059 "trtype": "$TEST_TRANSPORT", 00:47:21.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:47:21.059 "adrfam": "ipv4", 00:47:21.059 "trsvcid": "$NVMF_PORT", 00:47:21.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:47:21.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:47:21.059 "hdgst": ${hdgst:-false}, 00:47:21.059 "ddgst": ${ddgst:-false} 00:47:21.059 }, 00:47:21.059 "method": "bdev_nvme_attach_controller" 00:47:21.059 } 00:47:21.059 EOF 00:47:21.059 )") 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:47:21.059 20:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:47:21.059 "params": { 00:47:21.059 "name": "Nvme1", 00:47:21.059 "trtype": "tcp", 00:47:21.059 "traddr": "10.0.0.3", 00:47:21.059 "adrfam": "ipv4", 00:47:21.059 "trsvcid": "4420", 00:47:21.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:47:21.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:47:21.059 "hdgst": false, 00:47:21.059 "ddgst": false 00:47:21.059 }, 00:47:21.059 "method": "bdev_nvme_attach_controller" 00:47:21.059 }' 00:47:21.059 [2024-10-01 20:46:16.301911] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
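
The heredoc above only echoes the inner "params" object; gen_nvmf_target_json wraps it in SPDK's standard --json layout before handing it to bdevio on an anonymous fd. A sketch of the equivalent standalone invocation (the wrapper object around "params" is an assumption based on that layout, not echoed verbatim in the trace):

# process substitution reproduces the /dev/fd/62 pattern from the trace
test/bdev/bdevio/bdevio --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)

With that config bdevio attaches Nvme1n1 over NVMe/TCP and runs its CUnit suite against it, which is exactly the output that follows.
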
00:47:21.059 [2024-10-01 20:46:16.302021] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106193 ]
00:47:21.319 [2024-10-01 20:46:16.443281] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:47:21.319 [2024-10-01 20:46:16.514836] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:47:21.319 [2024-10-01 20:46:16.514971] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:47:21.319 [2024-10-01 20:46:16.514985] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:47:21.578 I/O targets:
00:47:21.578 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:47:21.578
00:47:21.578
00:47:21.578 CUnit - A unit testing framework for C - Version 2.1-3
00:47:21.578 http://cunit.sourceforge.net/
00:47:21.578
00:47:21.578
00:47:21.578 Suite: bdevio tests on: Nvme1n1
00:47:21.578 Test: blockdev write read block ...passed
00:47:21.578 Test: blockdev write zeroes read block ...passed
00:47:21.578 Test: blockdev write zeroes read no split ...passed
00:47:21.578 Test: blockdev write zeroes read split ...passed
00:47:21.578 Test: blockdev write zeroes read split partial ...passed
00:47:21.578 Test: blockdev reset ...[2024-10-01 20:46:16.812255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:47:21.578 [2024-10-01 20:46:16.812623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c0d20 (9): Bad file descriptor
00:47:21.578 [2024-10-01 20:46:16.816472] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:47:21.578 passed
00:47:21.578 Test: blockdev write read 8 blocks ...passed
00:47:21.578 Test: blockdev write read size > 128k ...passed
00:47:21.578 Test: blockdev write read invalid size ...passed
00:47:21.837 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:47:21.837 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:47:21.837 Test: blockdev write read max offset ...passed
00:47:21.837 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:47:21.837 Test: blockdev writev readv 8 blocks ...passed
00:47:21.837 Test: blockdev writev readv 30 x 1block ...passed
00:47:21.837 Test: blockdev writev readv block ...passed
00:47:21.837 Test: blockdev writev readv size > 128k ...passed
00:47:21.837 Test: blockdev writev readv size > 128k in two iovs ...passed
00:47:21.837 Test: blockdev comparev and writev ...[2024-10-01 20:46:16.989781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:47:21.837 [2024-10-01 20:46:16.989839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:47:21.837 [2024-10-01 20:46:16.989859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:47:21.837 [2024-10-01 20:46:16.989870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:47:21.837 [2024-10-01 20:46:16.990353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:47:21.837 [2024-10-01 20:46:16.990382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:47:21.837 [2024-10-01 20:46:16.990401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:47:21.837 [2024-10-01 20:46:16.990412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:47:21.837 [2024-10-01 20:46:16.990860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:47:21.837 [2024-10-01 20:46:16.990888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:47:21.837 [2024-10-01 20:46:16.990905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:47:21.837 [2024-10-01 20:46:16.990916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:47:21.837 [2024-10-01 20:46:16.991340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:47:21.837 [2024-10-01 20:46:16.991368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:47:21.837 [2024-10-01 20:46:16.991386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:47:21.837 [2024-10-01 20:46:16.991396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:47:21.837 passed
00:47:21.837 Test: blockdev nvme passthru rw ...passed
00:47:21.837 Test: blockdev nvme passthru vendor specific ...[2024-10-01 20:46:17.074440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:47:21.838 [2024-10-01 20:46:17.074469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:47:21.838 [2024-10-01 20:46:17.074617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:47:21.838 [2024-10-01 20:46:17.074634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:47:21.838 [2024-10-01 20:46:17.074756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:47:21.838 [2024-10-01 20:46:17.074780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:47:21.838 [2024-10-01 20:46:17.074904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:47:21.838 [2024-10-01 20:46:17.074926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:47:21.838 passed
00:47:22.098 Test: blockdev nvme admin passthru ...passed
00:47:22.098 Test: blockdev copy ...passed
00:47:22.098
00:47:22.098 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:47:22.098               suites      1      1    n/a      0        0
00:47:22.098                tests     23     23     23      0        0
00:47:22.098              asserts    152    152    152      0      n/a
00:47:22.098
00:47:22.098 Elapsed time =    0.857 seconds
00:47:22.098 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:47:22.098 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:47:22.098 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:47:22.098 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:47:22.098 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:47:22.098 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:47:22.098 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup
00:47:22.098 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:47:22.357 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:47:22.357 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:47:22.357 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:47:22.357 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:47:22.357 rmmod nvme_tcp
00:47:22.357 rmmod nvme_fabrics
00:47:22.357 rmmod nvme_keyring
00:47:22.357 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
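
Teardown mirrors setup in reverse, and the module unload is deliberately wrapped in a retry loop: nvme-tcp can stay pinned for a moment while the just-disconnected controller drains. A condensed sketch of the pattern traced here (the sleep and the simple kill/wait are simplifications of killprocess, not copied from the trace):

./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem first
sync
set +e                               # unload may fail while references drain
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1
done
set -e
kill "$nvmfpid" && wait "$nvmfpid"   # then stop the target itself

The SPDK_NVMF-tagged iptables rules added during setup are removed afterwards by iptr, which simply filters the tagged lines out of iptables-save output and restores the remainder.
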
00:47:22.357 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:47:22.357 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:47:22.357 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 106151 ']' 00:47:22.357 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 106151 00:47:22.357 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 106151 ']' 00:47:22.357 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 106151 00:47:22.357 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:47:22.357 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:22.357 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 106151 00:47:22.357 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:47:22.357 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:47:22.357 killing process with pid 106151 00:47:22.357 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 106151' 00:47:22.357 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 106151 00:47:22.357 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 106151 00:47:22.616 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:47:22.616 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:47:22.616 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:47:22.616 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:47:22.616 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:47:22.616 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:47:22.616 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:47:22.616 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:22.616 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:47:22.616 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:47:22.616 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:47:22.616 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:47:22.616 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:47:22.616 20:46:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:47:22.616 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:47:22.616 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:47:22.616 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:47:22.616 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:47:22.616 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:47:22.875 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:47:22.875 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:22.875 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:22.875 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:47:22.875 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:22.875 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:47:22.875 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:22.875 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:47:22.875 00:47:22.875 real 0m2.828s 00:47:22.875 user 0m7.257s 00:47:22.875 sys 0m1.195s 00:47:22.875 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:22.875 20:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:47:22.875 ************************************ 00:47:22.875 END TEST nvmf_bdevio 00:47:22.876 ************************************ 00:47:22.876 20:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:47:22.876 00:47:22.876 real 3m31.311s 00:47:22.876 user 9m32.321s 00:47:22.876 sys 1m24.501s 00:47:22.876 20:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:22.876 20:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:47:22.876 ************************************ 00:47:22.876 END TEST nvmf_target_core_interrupt_mode 00:47:22.876 ************************************ 00:47:22.876 20:46:18 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:47:22.876 20:46:18 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:47:22.876 20:46:18 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:47:22.876 20:46:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:22.876 ************************************ 00:47:22.876 START TEST nvmf_interrupt 00:47:22.876 ************************************ 00:47:22.876 20:46:18 nvmf_tcp.nvmf_interrupt -- 
common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:47:23.135 * Looking for test storage... 00:47:23.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:47:23.135 20:46:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:47:23.135 20:46:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:47:23.135 20:46:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:47:23.135 20:46:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:47:23.135 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:23.135 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:23.135 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:47:23.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:23.136 --rc genhtml_branch_coverage=1 00:47:23.136 --rc genhtml_function_coverage=1 00:47:23.136 --rc genhtml_legend=1 00:47:23.136 --rc geninfo_all_blocks=1 00:47:23.136 --rc geninfo_unexecuted_blocks=1 00:47:23.136 00:47:23.136 ' 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:47:23.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:23.136 --rc genhtml_branch_coverage=1 00:47:23.136 --rc genhtml_function_coverage=1 00:47:23.136 --rc genhtml_legend=1 00:47:23.136 --rc geninfo_all_blocks=1 00:47:23.136 --rc geninfo_unexecuted_blocks=1 00:47:23.136 00:47:23.136 ' 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:47:23.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:23.136 --rc genhtml_branch_coverage=1 00:47:23.136 --rc genhtml_function_coverage=1 00:47:23.136 --rc genhtml_legend=1 00:47:23.136 --rc geninfo_all_blocks=1 00:47:23.136 --rc geninfo_unexecuted_blocks=1 00:47:23.136 00:47:23.136 ' 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:47:23.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:23.136 --rc genhtml_branch_coverage=1 00:47:23.136 --rc genhtml_function_coverage=1 00:47:23.136 --rc genhtml_legend=1 00:47:23.136 --rc geninfo_all_blocks=1 00:47:23.136 --rc geninfo_unexecuted_blocks=1 00:47:23.136 00:47:23.136 ' 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
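
The cmp_versions trace above splits each version string on '.', '-' and ':' and compares field by field, so 'lt 1.15 2' succeeds because 1 < 2 in the first field. A condensed sketch of that logic, covering only the '<' operator and assuming numeric fields (missing fields count as 0):

lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local IFS=.-:            # same separators as the traced helper
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                 # equal versions are not less-than
}
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "legacy lcov: keep the --rc flags"

Here that check is what selects the pre-2.0 '--rc lcov_branch_coverage=1' spelling of the coverage options exported in the LCOV_OPTS/LCOV lines above.
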
00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:47:23.136 20:46:18 
nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@458 -- # nvmf_veth_init 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 
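
Two identity values from the sourcing above matter for every later connect: the hostnqn minted by nvme-cli and the hostid reused from its uuid suffix. A sketch of how that pair can be derived (the suffix-stripping is an assumption about the helper, consistent with the values shown in the trace):

NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:e1b29161-...
NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumption: hostid = the uuid after the last ':'
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# later: nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1

The NVMF_*_IP variables that follow lay out the same dual-homed plan as the earlier test: 10.0.0.1/.2 on the initiator side of the bridge, 10.0.0.3/.4 inside nvmf_tgt_ns_spdk.
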
00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:47:23.136 Cannot find device "nvmf_init_br" 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # true 00:47:23.136 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:47:23.136 Cannot find device "nvmf_init_br2" 00:47:23.137 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # true 00:47:23.137 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:47:23.137 Cannot find device "nvmf_tgt_br" 00:47:23.137 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # true 00:47:23.137 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:47:23.137 Cannot find device "nvmf_tgt_br2" 00:47:23.137 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # true 00:47:23.137 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:47:23.137 Cannot find device "nvmf_init_br" 00:47:23.137 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # true 00:47:23.137 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:47:23.137 Cannot find device "nvmf_init_br2" 00:47:23.137 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # true 00:47:23.137 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:47:23.137 Cannot find device "nvmf_tgt_br" 00:47:23.137 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # true 00:47:23.137 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:47:23.137 Cannot find device "nvmf_tgt_br2" 00:47:23.137 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # true 00:47:23.137 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:47:23.395 Cannot find device "nvmf_br" 00:47:23.395 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # true 00:47:23.395 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # ip 
link delete nvmf_init_if 00:47:23.395 Cannot find device "nvmf_init_if" 00:47:23.395 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # true 00:47:23.395 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:47:23.395 Cannot find device "nvmf_init_if2" 00:47:23.395 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # true 00:47:23.395 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:23.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:23.395 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # true 00:47:23.395 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:23.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:23.395 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # true 00:47:23.395 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:47:23.395 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:23.395 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:47:23.395 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:23.395 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:23.396 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:23.396 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:23.396 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:23.396 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:47:23.396 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:47:23.396 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:47:23.396 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:47:23.396 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:47:23.396 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:47:23.396 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:47:23.396 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:47:23.396 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:47:23.396 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:23.396 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:23.396 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:23.396 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:47:23.396 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
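
At this point the namespace, the four veth pairs, the addresses and the nvmf_br bridge are in place; the next entries enslave every host-side peer to the bridge and then ping across it. Reduced to a single initiator/target pair, the whole build is plain iproute2 (the second pair repeats this with the *2 names and 10.0.0.2/10.0.0.4):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator leg
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target leg
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge the two host-side peers
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.3                                           # host reaches the namespaced target

The four pings that follow in the trace verify exactly this forwarding, in both directions, before any NVMe/TCP traffic is attempted.
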
00:47:23.396 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:47:23.396 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:47:23.396 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:47:23.655 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:47:23.655 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:47:23.655 00:47:23.655 --- 10.0.0.3 ping statistics --- 00:47:23.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:23.655 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:47:23.655 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:47:23.655 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:47:23.655 00:47:23.655 --- 10.0.0.4 ping statistics --- 00:47:23.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:23.655 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:23.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:23.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:47:23.655 00:47:23.655 --- 10.0.0.1 ping statistics --- 00:47:23.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:23.655 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:47:23.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:47:23.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:47:23.655 00:47:23.655 --- 10.0.0.2 ping statistics --- 00:47:23.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:23.655 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@459 -- # return 0 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=106438 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 106438 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 106438 ']' 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:23.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:23.655 20:46:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:47:23.655 [2024-10-01 20:46:18.786134] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:47:23.655 [2024-10-01 20:46:18.787425] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:47:23.655 [2024-10-01 20:46:18.787512] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:23.934 [2024-10-01 20:46:18.927711] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:47:23.934 [2024-10-01 20:46:18.999441] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
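These startup notices (continued below) come from the nvmf target being launched inside the test namespace with --interrupt-mode. The launch follows the usual nvmfappstart flow: run nvmf_tgt under ip netns exec, remember its pid, and block until the RPC socket answers. A rough sketch of that pattern (binary path and flags are taken from the trace above; the polling loop is a simplified stand-in for the waitforlisten helper, and the rpc.py call shown is an assumption about how readiness could be probed):

# Start the target in interrupt mode inside the test namespace (sketch).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!
# Simplified stand-in for waitforlisten: poll until the RPC socket responds.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"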
00:47:23.934 [2024-10-01 20:46:18.999514] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:23.934 [2024-10-01 20:46:18.999539] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:23.934 [2024-10-01 20:46:18.999549] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:23.934 [2024-10-01 20:46:18.999558] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:23.934 [2024-10-01 20:46:19.002164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:47:23.935 [2024-10-01 20:46:19.002208] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:47:23.935 [2024-10-01 20:46:19.101303] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:47:23.935 [2024-10-01 20:46:19.101641] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:47:23.935 [2024-10-01 20:46:19.101760] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:47:23.935 20:46:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:23.935 20:46:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:47:23.935 20:46:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:47:23.935 20:46:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:23.935 20:46:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:47:23.935 20:46:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:23.935 20:46:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:47:23.935 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:47:23.935 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:47:23.935 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:47:24.194 5000+0 records in 00:47:24.194 5000+0 records out 00:47:24.194 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0320146 s, 320 MB/s 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:47:24.194 AIO0 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:47:24.194 [2024-10-01 20:46:19.263058] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:47:24.194 [2024-10-01 20:46:19.291155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 106438 0 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106438 0 idle 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106438 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106438 -w 256 00:47:24.194 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106438 root 20 0 64.2g 44928 32640 S 6.7 0.4 0:00.28 reactor_0' 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106438 root 20 0 64.2g 44928 32640 S 6.7 0.4 0:00.28 reactor_0 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 106438 1 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106438 1 idle 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106438 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106438 -w 256 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106448 root 20 0 64.2g 44928 32640 S 0.0 0.4 0:00.00 reactor_1' 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106448 root 20 0 64.2g 44928 32640 S 0.0 0.4 0:00.00 reactor_1 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=106498 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:47:24.454 
20:46:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 106438 0 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 106438 0 busy 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106438 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106438 -w 256 00:47:24.454 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:47:24.714 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106438 root 20 0 64.2g 44928 32640 S 0.0 0.4 0:00.28 reactor_0' 00:47:24.714 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106438 root 20 0 64.2g 44928 32640 S 0.0 0.4 0:00.28 reactor_0 00:47:24.714 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:47:24.714 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:47:24.714 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:47:24.714 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:47:24.714 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:47:24.714 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:47:24.714 20:46:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:47:25.649 20:46:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:47:25.649 20:46:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:47:25.649 20:46:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106438 -w 256 00:47:25.649 20:46:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106438 root 20 0 64.2g 46208 33024 R 99.9 0.4 0:01.65 reactor_0' 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106438 root 20 0 64.2g 46208 33024 R 99.9 0.4 0:01.65 reactor_0 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = 
\i\d\l\e ]] 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 106438 1 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 106438 1 busy 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106438 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106438 -w 256 00:47:25.907 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:47:26.166 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106448 root 20 0 64.2g 46208 33024 R 68.8 0.4 0:00.81 reactor_1' 00:47:26.166 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106448 root 20 0 64.2g 46208 33024 R 68.8 0.4 0:00.81 reactor_1 00:47:26.166 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:47:26.166 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:47:26.166 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=68.8 00:47:26.166 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=68 00:47:26.166 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:47:26.166 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:47:26.166 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:47:26.166 20:46:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:47:26.166 20:46:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 106498 00:47:36.142 Initializing NVMe Controllers 00:47:36.142 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:47:36.142 Controller IO queue size 256, less than required. 00:47:36.142 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:47:36.142 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:47:36.142 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:47:36.142 Initialization complete. Launching workers. 
00:47:36.142 ======================================================== 00:47:36.142 Latency(us) 00:47:36.142 Device Information : IOPS MiB/s Average min max 00:47:36.142 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 6406.20 25.02 40025.97 7851.99 87404.31 00:47:36.142 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 6595.90 25.77 38866.56 6302.33 81591.89 00:47:36.142 ======================================================== 00:47:36.142 Total : 13002.09 50.79 39437.80 6302.33 87404.31 00:47:36.142 00:47:36.142 20:46:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:47:36.142 20:46:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 106438 0 00:47:36.142 20:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106438 0 idle 00:47:36.142 20:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106438 00:47:36.142 20:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:47:36.142 20:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:47:36.142 20:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:47:36.142 20:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:47:36.142 20:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:47:36.142 20:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:47:36.142 20:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:47:36.142 20:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:47:36.142 20:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:47:36.142 20:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:47:36.142 20:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106438 -w 256 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106438 root 20 0 64.2g 46208 33024 S 0.0 0.4 0:13.67 reactor_0' 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106438 root 20 0 64.2g 46208 33024 S 0.0 0.4 0:13.67 reactor_0 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 106438 1 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106438 1 idle 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106438 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106438 -w 256 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106448 root 20 0 64.2g 46208 33024 S 0.0 0.4 0:06.69 reactor_1' 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106448 root 20 0 64.2g 46208 33024 S 0.0 0.4 0:06.69 reactor_1 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:47:36.142 20:46:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in 
{0..1} 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 106438 0 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106438 0 idle 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106438 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106438 -w 256 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106438 root 20 0 64.2g 48384 33024 S 6.7 0.4 0:13.73 reactor_0' 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106438 root 20 0 64.2g 48384 33024 S 6.7 0.4 0:13.73 reactor_0 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 106438 1 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106438 1 idle 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106438 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:47:37.518 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106438 -w 256 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106448 root 20 0 64.2g 48384 33024 S 0.0 0.4 0:06.70 reactor_1' 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106448 root 20 0 64.2g 48384 33024 S 0.0 0.4 0:06.70 reactor_1 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:47:37.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:47:37.519 20:46:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:47:37.777 20:46:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:47:37.777 20:46:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:47:37.777 20:46:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:47:37.777 20:46:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:47:37.777 20:46:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:47:38.036 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:38.036 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:47:38.036 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:38.036 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:38.036 rmmod nvme_tcp 00:47:38.036 rmmod nvme_fabrics 00:47:38.036 rmmod nvme_keyring 00:47:38.036 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:38.036 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:47:38.036 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:47:38.036 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 106438 ']' 00:47:38.036 20:46:33 nvmf_tcp.nvmf_interrupt 
-- nvmf/common.sh@516 -- # killprocess 106438 00:47:38.036 20:46:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 106438 ']' 00:47:38.036 20:46:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 106438 00:47:38.036 20:46:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:47:38.036 20:46:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:38.036 20:46:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 106438 00:47:38.036 20:46:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:47:38.036 20:46:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:47:38.036 killing process with pid 106438 00:47:38.036 20:46:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 106438' 00:47:38.036 20:46:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 106438 00:47:38.036 20:46:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 106438 00:47:38.296 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:47:38.296 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:47:38.296 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:47:38.296 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:47:38.296 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:47:38.296 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:47:38.296 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:47:38.296 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:38.296 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:47:38.296 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:47:38.296 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:47:38.296 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:47:38.296 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:47:38.296 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:47:38.296 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:47:38.296 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:47:38.554 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:47:38.554 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:47:38.554 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:47:38.555 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:47:38.555 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:38.555 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:38.555 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # remove_spdk_ns 00:47:38.555 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:47:38.555 20:46:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:38.555 20:46:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:38.555 20:46:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # return 0 00:47:38.555 00:47:38.555 real 0m15.617s 00:47:38.555 user 0m27.707s 00:47:38.555 sys 0m7.752s 00:47:38.555 20:46:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:38.555 20:46:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:47:38.555 ************************************ 00:47:38.555 END TEST nvmf_interrupt 00:47:38.555 ************************************ 00:47:38.555 ************************************ 00:47:38.555 END TEST nvmf_tcp 00:47:38.555 ************************************ 00:47:38.555 00:47:38.555 real 20m7.260s 00:47:38.555 user 53m26.754s 00:47:38.555 sys 4m57.732s 00:47:38.555 20:46:33 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:38.555 20:46:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:38.555 20:46:33 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:47:38.555 20:46:33 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:47:38.555 20:46:33 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:47:38.555 20:46:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:47:38.555 20:46:33 -- common/autotest_common.sh@10 -- # set +x 00:47:38.555 ************************************ 00:47:38.555 START TEST spdkcli_nvmf_tcp 00:47:38.555 ************************************ 00:47:38.555 20:46:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:47:38.815 * Looking for test storage... 
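Every busy/idle verdict in the nvmf_interrupt run above comes from a single helper that samples top in batch mode for the target pid, isolates the reactor_N thread line, and reads its %CPU field. A hedged reconstruction from the xtrace output (function name, thresholds, and the top/sed/awk pipeline mirror interrupt/common.sh as traced; the exact control flow and the BUSY_THRESHOLD override are inferred, not confirmed):

# Reconstruction of the reactor busy/idle probe (thresholds as seen in the trace).
reactor_is_busy_or_idle() {
    local pid=$1 idx=$2 state=$3
    local busy_threshold=${BUSY_THRESHOLD:-65}   # the test lowers this to 30 via BUSY_THRESHOLD
    local idle_threshold=30
    hash top || return 1                         # the helper requires procps top
    local j top_reactor cpu_rate
    for ((j = 10; j != 0; j--)); do
        # One batch/threads sample, wide output, filtered to this reactor thread.
        top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
        cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
        cpu_rate=${cpu_rate%.*}                  # truncate: 99.9 -> 99, 6.7 -> 6
        if [[ $state == busy ]]; then
            (( cpu_rate >= busy_threshold )) && return 0   # busy enough
        else
            (( cpu_rate <= idle_threshold )) && return 0   # idle enough
        fi
        sleep 1                                  # not there yet; sample again
    done
    return 1
}
# Usage, as in the test above: reactor_is_busy_or_idle 106438 0 idle

Field $9 of a top -bH thread line is %CPU, which matches the 6.7, 99.9, 68.8, and 0.0 readings visible in the reactor_0/reactor_1 samples earlier in the log.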
00:47:38.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:47:38.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:38.815 --rc genhtml_branch_coverage=1 00:47:38.815 --rc genhtml_function_coverage=1 00:47:38.815 --rc genhtml_legend=1 00:47:38.815 --rc geninfo_all_blocks=1 00:47:38.815 --rc geninfo_unexecuted_blocks=1 00:47:38.815 00:47:38.815 ' 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:47:38.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:38.815 --rc genhtml_branch_coverage=1 
00:47:38.815 --rc genhtml_function_coverage=1 00:47:38.815 --rc genhtml_legend=1 00:47:38.815 --rc geninfo_all_blocks=1 00:47:38.815 --rc geninfo_unexecuted_blocks=1 00:47:38.815 00:47:38.815 ' 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:47:38.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:38.815 --rc genhtml_branch_coverage=1 00:47:38.815 --rc genhtml_function_coverage=1 00:47:38.815 --rc genhtml_legend=1 00:47:38.815 --rc geninfo_all_blocks=1 00:47:38.815 --rc geninfo_unexecuted_blocks=1 00:47:38.815 00:47:38.815 ' 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:47:38.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:38.815 --rc genhtml_branch_coverage=1 00:47:38.815 --rc genhtml_function_coverage=1 00:47:38.815 --rc genhtml_legend=1 00:47:38.815 --rc geninfo_all_blocks=1 00:47:38.815 --rc geninfo_unexecuted_blocks=1 00:47:38.815 00:47:38.815 ' 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:38.815 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:38.816 20:46:33 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:38.816 20:46:33 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:38.816 20:46:33 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:38.816 20:46:33 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:38.816 20:46:33 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:47:38.816 20:46:33 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:38.816 20:46:33 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:47:38.816 20:46:33 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:38.816 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=106837 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 106837 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 106837 ']' 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:38.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:38.816 20:46:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:38.816 [2024-10-01 20:46:34.065689] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:47:38.816 [2024-10-01 20:46:34.065790] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106837 ] 00:47:39.075 [2024-10-01 20:46:34.202248] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:47:39.076 [2024-10-01 20:46:34.280950] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:47:39.076 [2024-10-01 20:46:34.280975] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:47:39.335 20:46:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:39.335 20:46:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:47:39.335 20:46:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:47:39.335 20:46:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:39.335 20:46:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:39.335 20:46:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:47:39.335 20:46:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:47:39.335 20:46:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:47:39.335 20:46:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:39.335 20:46:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:39.335 20:46:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:47:39.335 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:47:39.335 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:47:39.335 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:47:39.335 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:47:39.335 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:47:39.335 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:47:39.335 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:47:39.335 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:47:39.335 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:47:39.335 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:47:39.335 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:47:39.335 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:47:39.335 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:47:39.335 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:47:39.335 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:47:39.335 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:47:39.335 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:47:39.335 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:47:39.335 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:47:39.335 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:47:39.335 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:47:39.335 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:47:39.335 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:47:39.335 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:47:39.335 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:47:39.335 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:47:39.335 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:47:39.335 ' 00:47:42.629 [2024-10-01 20:46:37.181262] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:43.565 [2024-10-01 20:46:38.510387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:47:46.107 [2024-10-01 20:46:40.968379] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:47:48.012 [2024-10-01 20:46:43.081944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:47:49.916 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:47:49.916 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:47:49.916 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:47:49.916 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 
00:47:49.916 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:47:49.916 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:47:49.916 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:47:49.916 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:47:49.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:47:49.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:47:49.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:47:49.917 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:47:49.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:47:49.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:47:49.917 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:47:49.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:47:49.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:47:49.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:47:49.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:47:49.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:47:49.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:47:49.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:47:49.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:47:49.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True]
00:47:49.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:47:49.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:47:49.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:47:49.917 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:47:49.917 20:46:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:47:49.917 20:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable
00:47:49.917 20:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
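Note: each "Executing command" entry above is emitted by spdkcli_job.py and is just a spdkcli path plus arguments, followed by the string the output must match and a flag saying whether a match is required. The same NVMe-oF configuration can be rebuilt by hand against a running nvmf_tgt. A minimal sketch using only commands that appear in this trace (paths assume the same /home/vagrant/spdk_repo checkout; one-shot invocation is assumed to work for create commands the way it demonstrably does for "ll" below -- if not, type the same commands at the interactive spdkcli prompt):

  # create a backing bdev, a subsystem, and a TCP listener, then inspect the tree
  ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
  ./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
  ./scripts/spdkcli.py ll /nvmf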
00:47:49.917 20:46:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:47:49.917 20:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable
00:47:49.917 20:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:47:49.917 20:46:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match
00:47:49.917 20:46:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf
00:47:50.176 20:46:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:47:50.176 20:46:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:47:50.176 20:46:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:47:50.176 20:46:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable
00:47:50.176 20:46:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:47:50.176 20:46:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:47:50.176 20:46:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable
00:47:50.176 20:46:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:47:50.176 20:46:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:47:50.176 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:47:50.176 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:47:50.176 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:47:50.176 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\''
00:47:50.176 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\''
00:47:50.176 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:47:50.176 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:47:50.176 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:47:50.176 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:47:50.176 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:47:50.176 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:47:50.176 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:47:50.176 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\''
00:47:50.176 '
00:47:56.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:47:56.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:47:56.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:47:56.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:47:56.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False]
00:47:56.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False]
00:47:56.746 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:47:56.746 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:47:56.746 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:47:56.746 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:47:56.746 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:47:56.746 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:47:56.746 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:47:56.746 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 106837
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 106837 ']'
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 106837
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 106837
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:47:56.746 killing process with pid 106837
20:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 106837'
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 106837
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 106837
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']'
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 106837 ']'
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 106837
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 106837 ']'
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 106837
00:47:56.746 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (106837) - No such process
00:47:56.746 Process with pid 106837 is not found
20:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 106837 is not found'
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']'
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']'
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio
00:47:56.746 ************************************
00:47:56.746 END TEST spdkcli_nvmf_tcp
00:47:56.746 ************************************
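Note: the teardown above runs the harness's killprocess helper twice -- once from spdkcli/nvmf.sh@90 and once from the cleanup trap via spdkcli/common.sh@14 -- so the "kill: (106837) - No such process" error on the second pass is expected, not a failure. A minimal sketch of the pattern the xtrace shows (a readability paraphrase, not the literal autotest_common.sh source):

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                       # probe: is the pid still alive?
      process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 for an SPDK app
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                      # reap it so sockets and hugepages are really released
  }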
00:47:56.746
00:47:56.746 real 0m17.595s
00:47:56.746 user 0m38.234s
00:47:56.746 sys 0m0.928s
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:47:56.746 20:46:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:47:56.746 20:46:51 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:47:56.746 20:46:51 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:47:56.746 20:46:51 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:47:56.746 20:46:51 -- common/autotest_common.sh@10 -- # set +x
00:47:56.746 ************************************
00:47:56.746 START TEST nvmf_identify_passthru
00:47:56.746 ************************************
00:47:56.746 20:46:51 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:47:56.746 * Looking for test storage...
00:47:56.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:47:56.746 20:46:51 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:47:56.746 20:46:51 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version
00:47:56.746 20:46:51 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:47:56.746 20:46:51 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l
00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l
00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-:
00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1
00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-:
00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2
00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<'
00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2
00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1
00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in
00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1
00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 ))
00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:47:56.746 20:46:51 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:56.747 20:46:51 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:47:56.747 20:46:51 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:47:56.747 20:46:51 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:56.747 20:46:51 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:56.747 20:46:51 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:47:56.747 20:46:51 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:56.747 20:46:51 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:47:56.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:56.747 --rc genhtml_branch_coverage=1 00:47:56.747 --rc genhtml_function_coverage=1 00:47:56.747 --rc genhtml_legend=1 00:47:56.747 --rc geninfo_all_blocks=1 00:47:56.747 --rc geninfo_unexecuted_blocks=1 00:47:56.747 00:47:56.747 ' 00:47:56.747 20:46:51 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:47:56.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:56.747 --rc genhtml_branch_coverage=1 00:47:56.747 --rc genhtml_function_coverage=1 00:47:56.747 --rc genhtml_legend=1 00:47:56.747 --rc geninfo_all_blocks=1 00:47:56.747 --rc geninfo_unexecuted_blocks=1 00:47:56.747 00:47:56.747 ' 00:47:56.747 20:46:51 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:47:56.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:56.747 --rc genhtml_branch_coverage=1 00:47:56.747 --rc genhtml_function_coverage=1 00:47:56.747 --rc genhtml_legend=1 00:47:56.747 --rc geninfo_all_blocks=1 00:47:56.747 --rc geninfo_unexecuted_blocks=1 00:47:56.747 00:47:56.747 ' 00:47:56.747 20:46:51 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:47:56.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:56.747 --rc genhtml_branch_coverage=1 00:47:56.747 --rc genhtml_function_coverage=1 00:47:56.747 --rc genhtml_legend=1 00:47:56.747 --rc geninfo_all_blocks=1 00:47:56.747 --rc geninfo_unexecuted_blocks=1 00:47:56.747 00:47:56.747 ' 00:47:56.747 20:46:51 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:56.747 
20:46:51 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:56.747 20:46:51 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:47:56.747 20:46:51 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:56.747 20:46:51 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:56.747 20:46:51 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:56.747 20:46:51 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:56.747 20:46:51 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:56.747 20:46:51 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:56.747 20:46:51 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:47:56.747 20:46:51 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:56.747 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:56.747 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:56.747 20:46:51 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:56.747 20:46:51 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:47:56.747 20:46:51 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:56.747 20:46:51 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:56.747 20:46:51 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:56.747 20:46:51 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:56.748 20:46:51 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:56.748 20:46:51 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:56.748 20:46:51 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:47:56.748 20:46:51 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:56.748 20:46:51 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:56.748 20:46:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:56.748 20:46:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@458 -- # nvmf_veth_init 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:47:56.748 Cannot find device "nvmf_init_br" 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:47:56.748 Cannot find device "nvmf_init_br2" 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:47:56.748 Cannot find device "nvmf_tgt_br" 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@164 -- # true 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:47:56.748 Cannot find device "nvmf_tgt_br2" 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@165 -- # true 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:47:56.748 Cannot find device "nvmf_init_br" 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:47:56.748 Cannot find device "nvmf_init_br2" 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:47:56.748 Cannot find device "nvmf_tgt_br" 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@168 -- # true 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:47:56.748 Cannot find device "nvmf_tgt_br2" 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@169 -- # true 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:47:56.748 Cannot find device "nvmf_br" 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@170 -- # true 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:47:56.748 Cannot find device "nvmf_init_if" 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@171 -- # true 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:47:56.748 Cannot find device "nvmf_init_if2" 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@172 -- # true 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:56.748 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@173 -- # true 00:47:56.748 20:46:51 nvmf_identify_passthru -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:56.748 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@174 -- # true 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:47:56.748 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:47:56.749 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:47:56.749 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:47:56.749 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:47:56.749 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:47:56.749 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:47:56.749 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:56.749 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:56.749 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:56.749 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:47:56.749 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:47:56.749 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:47:56.749 20:46:51 nvmf_identify_passthru -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:47:57.008 20:46:52 nvmf_identify_passthru -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:57.008 20:46:52 nvmf_identify_passthru -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:57.008 20:46:52 nvmf_identify_passthru -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:57.008 20:46:52 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:47:57.008 20:46:52 nvmf_identify_passthru -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:47:57.008 20:46:52 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:47:57.008 20:46:52 nvmf_identify_passthru -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:47:57.008 20:46:52 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:47:57.008 20:46:52 nvmf_identify_passthru -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:47:57.008 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:47:57.008 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms
00:47:57.008
00:47:57.008 --- 10.0.0.3 ping statistics ---
00:47:57.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:47:57.008 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
00:47:57.008 20:46:52 nvmf_identify_passthru -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:47:57.008 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:47:57.008 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms
00:47:57.008
00:47:57.008 --- 10.0.0.4 ping statistics ---
00:47:57.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:47:57.008 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms
00:47:57.008 20:46:52 nvmf_identify_passthru -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:47:57.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:47:57.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms
00:47:57.008
00:47:57.008 --- 10.0.0.1 ping statistics ---
00:47:57.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:47:57.008 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms
00:47:57.008 20:46:52 nvmf_identify_passthru -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:47:57.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:47:57.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms
00:47:57.008
00:47:57.008 --- 10.0.0.2 ping statistics ---
00:47:57.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:47:57.009 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms
00:47:57.009 20:46:52 nvmf_identify_passthru -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:47:57.009 20:46:52 nvmf_identify_passthru -- nvmf/common.sh@459 -- # return 0
00:47:57.009 20:46:52 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:47:57.009 20:46:52 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:47:57.009 20:46:52 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:47:57.009 20:46:52 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:47:57.009 20:46:52 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:47:57.009 20:46:52 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:47:57.009 20:46:52 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:47:57.009 20:46:52 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify
00:47:57.009 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable
00:47:57.009 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:47:57.009 20:46:52 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf
00:47:57.009 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=()
00:47:57.009 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs
00:47:57.009 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs))
00:47:57.009 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs
00:47:57.009 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=()
00:47:57.009 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs
00:47:57.009 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:47:57.009 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:47:57.009 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:47:57.009 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 2 == 0 ))
00:47:57.009 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:47:57.009 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0
00:47:57.009 20:46:52 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0
00:47:57.009 20:46:52 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']'
00:47:57.009 20:46:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:'
00:47:57.009 20:46:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0
00:47:57.009 20:46:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}'
00:47:57.269 20:46:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340
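Note: this is where the test caches the local controller's identity: it asks gen_nvme.sh for the attached NVMe addresses, takes the first one (0000:00:10.0 here), and scrapes the serial number out of spdk_nvme_identify's text output; the model number is captured the same way in the next block. The probe can be run standalone -- a sketch built only from commands already visible in this trace (paths assume the same checkout; head -n1 stands in for the helper's first-element selection):

  bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
  serial=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
  model=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
  echo "$serial $model"   # 12340 QEMU in this run

These values are later compared against what the passthru subsystem reports over NVMe/TCP (identify_passthru.sh@63 and @68 below), which is the actual assertion of this test.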
00:47:57.269 20:46:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:47:57.269 20:46:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:47:57.269 20:46:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:47:57.269 20:46:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:47:57.269 20:46:52 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:47:57.269 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:57.269 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:57.528 20:46:52 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:47:57.528 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:57.528 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:57.528 20:46:52 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=107346 00:47:57.528 20:46:52 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:47:57.528 20:46:52 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:47:57.528 20:46:52 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 107346 00:47:57.528 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 107346 ']' 00:47:57.528 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:57.528 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:57.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:57.528 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:57.528 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:57.528 20:46:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:57.528 [2024-10-01 20:46:52.636163] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:47:57.529 [2024-10-01 20:46:52.636886] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:57.787 [2024-10-01 20:46:52.783277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:57.787 [2024-10-01 20:46:52.844528] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:57.787 [2024-10-01 20:46:52.844610] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:57.787 [2024-10-01 20:46:52.844637] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:57.787 [2024-10-01 20:46:52.844644] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:47:57.787 [2024-10-01 20:46:52.844651] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:57.787 [2024-10-01 20:46:52.844800] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:47:57.787 [2024-10-01 20:46:52.845526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:47:57.787 [2024-10-01 20:46:52.845692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:47:57.787 [2024-10-01 20:46:52.845698] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:47:58.722 20:46:53 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:58.722 20:46:53 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:58.722 [2024-10-01 20:46:53.747339] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:58.722 20:46:53 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:58.722 [2024-10-01 20:46:53.760824] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:58.722 20:46:53 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:58.722 20:46:53 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:58.722 Nvme0n1 00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:58.722 20:46:53 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:58.722 20:46:53 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable
00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:47:58.722 20:46:53 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable
00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:47:58.722 [2024-10-01 20:46:53.899625] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:47:58.722 20:46:53 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems
00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable
00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:47:58.722 [
00:47:58.722   {
00:47:58.722     "allow_any_host": true,
00:47:58.722     "hosts": [],
00:47:58.722     "listen_addresses": [],
00:47:58.722     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:47:58.722     "subtype": "Discovery"
00:47:58.722   },
00:47:58.722   {
00:47:58.722     "allow_any_host": true,
00:47:58.722     "hosts": [],
00:47:58.722     "listen_addresses": [
00:47:58.722       {
00:47:58.722         "adrfam": "IPv4",
00:47:58.722         "traddr": "10.0.0.3",
00:47:58.722         "trsvcid": "4420",
00:47:58.722         "trtype": "TCP"
00:47:58.722       }
00:47:58.722     ],
00:47:58.722     "max_cntlid": 65519,
00:47:58.722     "max_namespaces": 1,
00:47:58.722     "min_cntlid": 1,
00:47:58.722     "model_number": "SPDK bdev Controller",
00:47:58.722     "namespaces": [
00:47:58.722       {
00:47:58.722         "bdev_name": "Nvme0n1",
00:47:58.722         "name": "Nvme0n1",
00:47:58.722         "nguid": "5EB01AC8FD304BE697B579C2CC78D019",
00:47:58.722         "nsid": 1,
00:47:58.722         "uuid": "5eb01ac8-fd30-4be6-97b5-79c2cc78d019"
00:47:58.722       }
00:47:58.722     ],
00:47:58.722     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:47:58.722     "serial_number": "SPDK00000000000001",
00:47:58.722     "subtype": "NVMe"
00:47:58.722   }
00:47:58.722 ]
00:47:58.722 20:46:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:47:58.722 20:46:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}'
00:47:58.722 20:46:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:47:58.722 20:46:53 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:'
00:47:58.981 20:46:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340
00:47:58.981 20:46:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}'
00:47:58.981 20:46:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:'
00:47:58.981 20:46:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:47:59.239 20:46:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU
00:47:59.239 20:46:54
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:47:59.239 20:46:54 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:47:59.239 20:46:54 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:59.239 20:46:54 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:59.239 20:46:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:59.239 20:46:54 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:59.239 20:46:54 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:47:59.239 20:46:54 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:47:59.239 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:47:59.239 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:47:59.239 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:59.239 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:47:59.239 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:59.239 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:59.239 rmmod nvme_tcp 00:47:59.239 rmmod nvme_fabrics 00:47:59.239 rmmod nvme_keyring 00:47:59.497 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:59.497 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:47:59.497 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:47:59.497 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@515 -- # '[' -n 107346 ']' 00:47:59.497 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 107346 00:47:59.497 20:46:54 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 107346 ']' 00:47:59.497 20:46:54 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 107346 00:47:59.497 20:46:54 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:47:59.497 20:46:54 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:59.497 20:46:54 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 107346 00:47:59.497 20:46:54 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:47:59.498 20:46:54 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:47:59.498 killing process with pid 107346 00:47:59.498 20:46:54 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 107346' 00:47:59.498 20:46:54 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 107346 00:47:59.498 20:46:54 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 107346 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@789 -- # 
iptables-restore 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@246 -- # remove_spdk_ns 00:47:59.756 20:46:54 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:59.756 20:46:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:59.756 20:46:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:00.015 20:46:55 nvmf_identify_passthru -- nvmf/common.sh@300 -- # return 0 00:48:00.015 00:48:00.015 real 0m3.599s 00:48:00.015 user 0m8.210s 00:48:00.015 sys 0m0.962s 00:48:00.015 20:46:55 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:48:00.015 20:46:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:48:00.015 ************************************ 00:48:00.015 END TEST nvmf_identify_passthru 00:48:00.015 ************************************ 00:48:00.015 20:46:55 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:48:00.015 20:46:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:48:00.015 20:46:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:48:00.015 20:46:55 -- common/autotest_common.sh@10 -- # set +x 00:48:00.015 ************************************ 00:48:00.015 START TEST nvmf_dif 00:48:00.015 ************************************ 00:48:00.015 20:46:55 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:48:00.015 * Looking for test storage... 
00:48:00.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:48:00.015 20:46:55 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:48:00.015 20:46:55 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:48:00.015 20:46:55 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:48:00.015 20:46:55 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:00.015 20:46:55 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:48:00.015 20:46:55 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:00.015 20:46:55 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:48:00.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:00.015 --rc genhtml_branch_coverage=1 00:48:00.015 --rc genhtml_function_coverage=1 00:48:00.015 --rc genhtml_legend=1 00:48:00.015 --rc geninfo_all_blocks=1 00:48:00.015 --rc geninfo_unexecuted_blocks=1 00:48:00.015 00:48:00.015 ' 00:48:00.015 20:46:55 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:48:00.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:00.015 --rc genhtml_branch_coverage=1 00:48:00.015 --rc genhtml_function_coverage=1 00:48:00.015 --rc genhtml_legend=1 00:48:00.015 --rc geninfo_all_blocks=1 00:48:00.015 --rc geninfo_unexecuted_blocks=1 00:48:00.015 00:48:00.015 ' 00:48:00.015 20:46:55 nvmf_dif -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:48:00.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:00.015 --rc genhtml_branch_coverage=1 00:48:00.015 --rc genhtml_function_coverage=1 00:48:00.015 --rc genhtml_legend=1 00:48:00.015 --rc geninfo_all_blocks=1 00:48:00.015 --rc geninfo_unexecuted_blocks=1 00:48:00.015 00:48:00.015 ' 00:48:00.015 20:46:55 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:48:00.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:00.015 --rc genhtml_branch_coverage=1 00:48:00.015 --rc genhtml_function_coverage=1 00:48:00.015 --rc genhtml_legend=1 00:48:00.015 --rc geninfo_all_blocks=1 00:48:00.015 --rc geninfo_unexecuted_blocks=1 00:48:00.015 00:48:00.015 ' 00:48:00.015 20:46:55 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:48:00.015 20:46:55 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:48:00.015 20:46:55 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:00.015 20:46:55 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:00.015 20:46:55 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:00.015 20:46:55 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:00.015 20:46:55 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:00.015 20:46:55 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:00.015 20:46:55 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:00.015 20:46:55 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:00.015 20:46:55 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:00.015 20:46:55 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:48:00.274 20:46:55 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:48:00.274 20:46:55 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:00.274 20:46:55 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:00.274 20:46:55 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:00.274 20:46:55 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:00.274 20:46:55 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:00.274 20:46:55 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:00.274 20:46:55 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:48:00.274 20:46:55 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:48:00.274 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:48:00.274 20:46:55 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:48:00.274 20:46:55 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:48:00.274 20:46:55 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:48:00.274 20:46:55 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:48:00.274 20:46:55 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:00.274 20:46:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:48:00.274 20:46:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:48:00.274 20:46:55 
nvmf_dif -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@458 -- # nvmf_veth_init 00:48:00.274 20:46:55 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:48:00.275 Cannot find device "nvmf_init_br" 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@162 -- # true 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:48:00.275 Cannot find device "nvmf_init_br2" 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@163 -- # true 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:48:00.275 Cannot find device "nvmf_tgt_br" 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@164 -- # true 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:48:00.275 Cannot find device "nvmf_tgt_br2" 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@165 -- # true 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:48:00.275 Cannot find device "nvmf_init_br" 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@166 -- # true 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:48:00.275 Cannot find device "nvmf_init_br2" 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@167 -- # true 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:48:00.275 Cannot find device "nvmf_tgt_br" 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@168 -- # true 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:48:00.275 Cannot find device "nvmf_tgt_br2" 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@169 -- # true 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:48:00.275 Cannot find device "nvmf_br" 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@170 -- # true 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:48:00.275 Cannot find device "nvmf_init_if" 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@171 -- # true 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:48:00.275 Cannot find device "nvmf_init_if2" 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@172 -- # true 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:48:00.275 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@173 -- # true 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:48:00.275 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@174 -- # true 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:48:00.275 20:46:55 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:48:00.533 20:46:55 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:48:00.533 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:48:00.533 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:48:00.533 00:48:00.533 --- 10.0.0.3 ping statistics --- 00:48:00.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:00.533 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:48:00.533 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:48:00.533 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:48:00.533 00:48:00.533 --- 10.0.0.4 ping statistics --- 00:48:00.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:00.533 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:48:00.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:48:00.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:48:00.533 00:48:00.533 --- 10.0.0.1 ping statistics --- 00:48:00.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:00.533 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:48:00.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:48:00.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:48:00.533 00:48:00.533 --- 10.0.0.2 ping statistics --- 00:48:00.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:00.533 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@459 -- # return 0 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:48:00.533 20:46:55 nvmf_dif -- nvmf/common.sh@477 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:48:00.792 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:48:00.792 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:48:00.792 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:48:00.792 20:46:56 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:48:00.792 20:46:56 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:48:00.792 20:46:56 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:48:00.792 20:46:56 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:48:00.792 20:46:56 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:48:00.792 20:46:56 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:48:01.050 20:46:56 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:48:01.050 20:46:56 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:48:01.050 20:46:56 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:48:01.050 20:46:56 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:48:01.050 20:46:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:48:01.050 20:46:56 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=107749 00:48:01.050 20:46:56 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:48:01.050 20:46:56 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 107749 00:48:01.050 20:46:56 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 107749 ']' 00:48:01.050 20:46:56 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:01.050 20:46:56 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:48:01.050 20:46:56 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:01.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:01.050 20:46:56 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:48:01.050 20:46:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:48:01.050 [2024-10-01 20:46:56.123953] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:48:01.050 [2024-10-01 20:46:56.124063] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:01.050 [2024-10-01 20:46:56.266872] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:01.309 [2024-10-01 20:46:56.336674] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
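The "Cannot find device" probes earlier in this section are expected on a clean host: nvmf_veth_init tears down any leftovers before rebuilding the topology, then verifies it with the pings just traced and launches the target inside the namespace. A minimal sketch of one initiator/target pair, using exactly the names and addresses from the trace (the if2/br2 pair carrying 10.0.0.2 and 10.0.0.4 is created the same way):

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if ends carry addresses, the *_br ends become bridge ports
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # open the NVMe/TCP port, tagging the rule so teardown can find it later
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.3                                    # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

    # the target then runs inside the namespace; waitforlisten polls the RPC
    # socket /var/tmp/spdk.sock until it answers
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &

The startup notices that follow are that target coming up on reactor core 0.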
00:48:01.309 [2024-10-01 20:46:56.336748] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:01.309 [2024-10-01 20:46:56.336762] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:01.309 [2024-10-01 20:46:56.336772] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:01.309 [2024-10-01 20:46:56.336781] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:48:01.309 [2024-10-01 20:46:56.336822] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:48:01.309 20:46:56 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:48:01.309 20:46:56 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:48:01.309 20:46:56 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:48:01.309 20:46:56 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:48:01.309 20:46:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:48:01.309 20:46:56 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:01.309 20:46:56 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:48:01.309 20:46:56 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:48:01.309 20:46:56 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:01.309 20:46:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:48:01.310 [2024-10-01 20:46:56.523081] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:01.310 20:46:56 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:01.310 20:46:56 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:48:01.310 20:46:56 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:48:01.310 20:46:56 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:48:01.310 20:46:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:48:01.310 ************************************ 00:48:01.310 START TEST fio_dif_1_default 00:48:01.310 ************************************ 00:48:01.310 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:48:01.310 20:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:48:01.310 20:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:48:01.310 20:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:48:01.310 20:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:48:01.310 20:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:48:01.310 20:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:48:01.310 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:01.310 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:48:01.310 bdev_null0 00:48:01.310 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:01.310 20:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:48:01.310 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:01.310 20:46:56 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:48:01.310 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:01.310 20:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:48:01.310 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:01.310 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:48:01.569 [2024-10-01 20:46:56.567277] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:01.569 { 00:48:01.569 "params": { 00:48:01.569 "name": "Nvme$subsystem", 00:48:01.569 "trtype": "$TEST_TRANSPORT", 00:48:01.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:01.569 "adrfam": "ipv4", 00:48:01.569 "trsvcid": "$NVMF_PORT", 00:48:01.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:01.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:01.569 "hdgst": ${hdgst:-false}, 00:48:01.569 "ddgst": ${ddgst:-false} 00:48:01.569 }, 00:48:01.569 "method": "bdev_nvme_attach_controller" 00:48:01.569 } 00:48:01.569 EOF 00:48:01.569 )") 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:01.569 20:46:56 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:48:01.569 "params": { 00:48:01.569 "name": "Nvme0", 00:48:01.569 "trtype": "tcp", 00:48:01.569 "traddr": "10.0.0.3", 00:48:01.569 "adrfam": "ipv4", 00:48:01.569 "trsvcid": "4420", 00:48:01.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:01.569 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:01.569 "hdgst": false, 00:48:01.569 "ddgst": false 00:48:01.569 }, 00:48:01.569 "method": "bdev_nvme_attach_controller" 00:48:01.569 }' 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:48:01.569 20:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:01.569 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:48:01.569 fio-3.35 00:48:01.569 Starting 1 thread 00:48:13.763 00:48:13.763 filename0: (groupid=0, jobs=1): err= 0: pid=107816: Tue Oct 1 20:47:07 2024 00:48:13.763 read: IOPS=253, BW=1015KiB/s (1040kB/s)(9.94MiB/10022msec) 00:48:13.763 slat (nsec): min=6568, max=60822, avg=10898.24, stdev=7290.35 00:48:13.763 clat (usec): min=455, max=41497, avg=15721.65, stdev=19668.44 00:48:13.763 lat (usec): min=463, max=41508, avg=15732.55, stdev=19668.17 00:48:13.763 clat percentiles (usec): 00:48:13.763 | 1.00th=[ 461], 5.00th=[ 474], 10.00th=[ 482], 20.00th=[ 498], 00:48:13.763 | 30.00th=[ 506], 40.00th=[ 515], 50.00th=[ 529], 
60.00th=[ 545], 00:48:13.763 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:48:13.763 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:48:13.763 | 99.99th=[41681] 00:48:13.763 bw ( KiB/s): min= 576, max= 1440, per=100.00%, avg=1016.00, stdev=226.96, samples=20 00:48:13.763 iops : min= 144, max= 360, avg=254.00, stdev=56.74, samples=20 00:48:13.763 lat (usec) : 500=23.62%, 750=38.80% 00:48:13.763 lat (msec) : 10=0.16%, 50=37.42% 00:48:13.763 cpu : usr=91.56%, sys=7.93%, ctx=32, majf=0, minf=9 00:48:13.763 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:13.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:13.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:13.763 issued rwts: total=2544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:13.763 latency : target=0, window=0, percentile=100.00%, depth=4 00:48:13.763 00:48:13.763 Run status group 0 (all jobs): 00:48:13.763 READ: bw=1015KiB/s (1040kB/s), 1015KiB/s-1015KiB/s (1040kB/s-1040kB/s), io=9.94MiB (10.4MB), run=10022-10022msec 00:48:13.763 20:47:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:48:13.763 20:47:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:48:13.763 20:47:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:48:13.763 20:47:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:48:13.763 20:47:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:48:13.763 20:47:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:13.764 00:48:13.764 real 0m11.088s 00:48:13.764 user 0m9.842s 00:48:13.764 sys 0m1.073s 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:48:13.764 ************************************ 00:48:13.764 END TEST fio_dif_1_default 00:48:13.764 ************************************ 00:48:13.764 20:47:07 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:48:13.764 20:47:07 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:48:13.764 20:47:07 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:48:13.764 20:47:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:48:13.764 ************************************ 00:48:13.764 START TEST fio_dif_1_multi_subsystems 00:48:13.764 ************************************ 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@92 -- # local files=1 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:13.764 bdev_null0 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:13.764 [2024-10-01 20:47:07.703601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:13.764 bdev_null1 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:13.764 { 00:48:13.764 "params": { 00:48:13.764 "name": "Nvme$subsystem", 00:48:13.764 "trtype": "$TEST_TRANSPORT", 00:48:13.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:13.764 "adrfam": "ipv4", 00:48:13.764 "trsvcid": "$NVMF_PORT", 00:48:13.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:13.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:13.764 "hdgst": ${hdgst:-false}, 00:48:13.764 "ddgst": ${ddgst:-false} 00:48:13.764 }, 00:48:13.764 "method": "bdev_nvme_attach_controller" 00:48:13.764 } 00:48:13.764 EOF 00:48:13.764 )") 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:13.764 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:13.764 { 00:48:13.764 "params": { 00:48:13.764 "name": "Nvme$subsystem", 00:48:13.764 "trtype": "$TEST_TRANSPORT", 00:48:13.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:13.764 "adrfam": "ipv4", 00:48:13.764 "trsvcid": "$NVMF_PORT", 00:48:13.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:13.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:13.765 "hdgst": ${hdgst:-false}, 00:48:13.765 "ddgst": ${ddgst:-false} 00:48:13.765 }, 00:48:13.765 "method": "bdev_nvme_attach_controller" 00:48:13.765 } 00:48:13.765 EOF 00:48:13.765 )") 00:48:13.765 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:48:13.765 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:48:13.765 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:48:13.765 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 
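The fio_bdev plumbing above assembles one bdev_nvme_attach_controller entry per subsystem and hands the result to stock fio with SPDK's bdev engine preloaded. Below is a sketch of the equivalent invocation with the config in a regular file rather than /dev/fd/62; the trace prints only the per-controller fragments, so the surrounding "subsystems"/"bdev" wrapper shown here is the standard SPDK JSON-config layout and is an assumption, as is the Nvme0n1/Nvme1n1 bdev naming the generated job file points at:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_nvme_attach_controller",
              "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
                          "adrfam": "ipv4", "trsvcid": "4420",
                          "subnqn": "nqn.2016-06.io.spdk:cnode0",
                          "hostnqn": "nqn.2016-06.io.spdk:host0",
                          "hdgst": false, "ddgst": false } },
            { "method": "bdev_nvme_attach_controller",
              "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3",
                          "adrfam": "ipv4", "trsvcid": "4420",
                          "subnqn": "nqn.2016-06.io.spdk:cnode1",
                          "hostnqn": "nqn.2016-06.io.spdk:host1",
                          "hdgst": false, "ddgst": false } }
          ]
        }
      ]
    }

    # saved as /tmp/nvmf_fio.json, the run becomes: preload the fio plugin,
    # select its ioengine, and point it at the config; each attached controller
    # exposes its namespace as a bdev (Nvme0n1, Nvme1n1) that the job file's
    # filename= lines reference (the job file arrives on /dev/fd/61 in the trace)
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvmf_fio.json job.fio

The per-controller JSON that this run actually generated is printed in the trace that follows.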
00:48:13.765 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:48:13.765 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:48:13.765 "params": { 00:48:13.765 "name": "Nvme0", 00:48:13.765 "trtype": "tcp", 00:48:13.765 "traddr": "10.0.0.3", 00:48:13.765 "adrfam": "ipv4", 00:48:13.765 "trsvcid": "4420", 00:48:13.765 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:13.765 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:13.765 "hdgst": false, 00:48:13.765 "ddgst": false 00:48:13.765 }, 00:48:13.765 "method": "bdev_nvme_attach_controller" 00:48:13.765 },{ 00:48:13.765 "params": { 00:48:13.765 "name": "Nvme1", 00:48:13.765 "trtype": "tcp", 00:48:13.765 "traddr": "10.0.0.3", 00:48:13.765 "adrfam": "ipv4", 00:48:13.765 "trsvcid": "4420", 00:48:13.765 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:13.765 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:13.765 "hdgst": false, 00:48:13.765 "ddgst": false 00:48:13.765 }, 00:48:13.765 "method": "bdev_nvme_attach_controller" 00:48:13.765 }' 00:48:13.765 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:48:13.765 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:48:13.765 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:13.765 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:13.765 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:48:13.765 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:13.765 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:48:13.765 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:48:13.765 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:48:13.765 20:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:13.765 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:48:13.765 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:48:13.765 fio-3.35 00:48:13.765 Starting 2 threads 00:48:23.734 00:48:23.734 filename0: (groupid=0, jobs=1): err= 0: pid=107976: Tue Oct 1 20:47:18 2024 00:48:23.734 read: IOPS=153, BW=615KiB/s (630kB/s)(6176KiB/10038msec) 00:48:23.734 slat (nsec): min=7999, max=65044, avg=12767.01, stdev=8336.55 00:48:23.734 clat (usec): min=449, max=42105, avg=25961.30, stdev=19588.86 00:48:23.734 lat (usec): min=457, max=42131, avg=25974.06, stdev=19588.17 00:48:23.734 clat percentiles (usec): 00:48:23.734 | 1.00th=[ 478], 5.00th=[ 498], 10.00th=[ 510], 20.00th=[ 529], 00:48:23.734 | 30.00th=[ 570], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:48:23.734 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:48:23.734 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:48:23.734 | 99.99th=[42206] 00:48:23.734 bw ( KiB/s): min= 416, max= 1760, per=46.10%, avg=616.00, stdev=284.68, samples=20 00:48:23.734 iops : 
min= 104, max= 440, avg=154.00, stdev=71.17, samples=20 00:48:23.734 lat (usec) : 500=6.02%, 750=27.91%, 1000=3.11% 00:48:23.734 lat (msec) : 4=0.26%, 50=62.69% 00:48:23.734 cpu : usr=95.06%, sys=4.39%, ctx=40, majf=0, minf=9 00:48:23.734 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:23.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:23.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:23.734 issued rwts: total=1544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:23.734 latency : target=0, window=0, percentile=100.00%, depth=4 00:48:23.734 filename1: (groupid=0, jobs=1): err= 0: pid=107977: Tue Oct 1 20:47:18 2024 00:48:23.734 read: IOPS=179, BW=719KiB/s (736kB/s)(7216KiB/10038msec) 00:48:23.734 slat (nsec): min=6533, max=62493, avg=13274.30, stdev=9230.11 00:48:23.734 clat (usec): min=453, max=41867, avg=22210.23, stdev=20194.04 00:48:23.734 lat (usec): min=461, max=41902, avg=22223.50, stdev=20193.85 00:48:23.735 clat percentiles (usec): 00:48:23.735 | 1.00th=[ 482], 5.00th=[ 506], 10.00th=[ 519], 20.00th=[ 537], 00:48:23.735 | 30.00th=[ 553], 40.00th=[ 635], 50.00th=[40633], 60.00th=[41157], 00:48:23.735 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:48:23.735 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:48:23.735 | 99.99th=[41681] 00:48:23.735 bw ( KiB/s): min= 416, max= 1824, per=53.97%, avg=720.00, stdev=392.12, samples=20 00:48:23.735 iops : min= 104, max= 456, avg=180.00, stdev=98.03, samples=20 00:48:23.735 lat (usec) : 500=3.49%, 750=40.41%, 1000=2.38% 00:48:23.735 lat (msec) : 2=0.06%, 4=0.22%, 50=53.44% 00:48:23.735 cpu : usr=94.96%, sys=4.45%, ctx=13, majf=0, minf=0 00:48:23.735 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:23.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:23.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:23.735 issued rwts: total=1804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:23.735 latency : target=0, window=0, percentile=100.00%, depth=4 00:48:23.735 00:48:23.735 Run status group 0 (all jobs): 00:48:23.735 READ: bw=1334KiB/s (1366kB/s), 615KiB/s-719KiB/s (630kB/s-736kB/s), io=13.1MiB (13.7MB), run=10038-10038msec 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:23.735 20:47:18 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:23.735 00:48:23.735 real 0m11.262s 00:48:23.735 user 0m19.942s 00:48:23.735 sys 0m1.188s 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:48:23.735 20:47:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:23.735 ************************************ 00:48:23.735 END TEST fio_dif_1_multi_subsystems 00:48:23.735 ************************************ 00:48:23.735 20:47:18 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:48:23.735 20:47:18 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:48:23.735 20:47:18 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:48:23.735 20:47:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:48:23.735 ************************************ 00:48:23.735 START TEST fio_dif_rand_params 00:48:23.735 ************************************ 00:48:23.735 20:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:48:23.735 20:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:48:23.735 20:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:48:23.735 20:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:48:23.735 20:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:48:23.735 20:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:48:23.735 20:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:48:23.735 20:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:48:23.735 20:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:48:23.735 20:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:48:23.735 20:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:23.735 20:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:48:23.735 20:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:48:23.735 20:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:48:23.735 20:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:23.735 20:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:23.992 bdev_null0 00:48:23.992 20:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:23.992 20:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:48:23.992 20:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:23.992 20:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:23.992 20:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:23.992 20:47:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:48:23.992 20:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:23.992 20:47:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:23.992 [2024-10-01 20:47:19.007895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:23.992 { 00:48:23.992 "params": { 00:48:23.992 "name": "Nvme$subsystem", 00:48:23.992 "trtype": "$TEST_TRANSPORT", 00:48:23.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:23.992 "adrfam": "ipv4", 00:48:23.992 "trsvcid": "$NVMF_PORT", 00:48:23.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:23.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:23.992 "hdgst": ${hdgst:-false}, 00:48:23.992 
"ddgst": ${ddgst:-false} 00:48:23.992 }, 00:48:23.992 "method": "bdev_nvme_attach_controller" 00:48:23.992 } 00:48:23.992 EOF 00:48:23.992 )") 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:48:23.992 20:47:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:48:23.992 "params": { 00:48:23.992 "name": "Nvme0", 00:48:23.992 "trtype": "tcp", 00:48:23.992 "traddr": "10.0.0.3", 00:48:23.992 "adrfam": "ipv4", 00:48:23.992 "trsvcid": "4420", 00:48:23.992 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:23.993 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:23.993 "hdgst": false, 00:48:23.993 "ddgst": false 00:48:23.993 }, 00:48:23.993 "method": "bdev_nvme_attach_controller" 00:48:23.993 }' 00:48:23.993 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:48:23.993 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:48:23.993 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:23.993 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:48:23.993 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:23.993 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:23.993 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:48:23.993 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:48:23.993 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:48:23.993 20:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:24.251 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:48:24.251 ... 
00:48:24.251 fio-3.35 00:48:24.251 Starting 3 threads 00:48:30.809 00:48:30.809 filename0: (groupid=0, jobs=1): err= 0: pid=108128: Tue Oct 1 20:47:24 2024 00:48:30.809 read: IOPS=225, BW=28.2MiB/s (29.5MB/s)(141MiB/5004msec) 00:48:30.809 slat (nsec): min=5818, max=54913, avg=14417.47, stdev=5281.64 00:48:30.809 clat (usec): min=6293, max=55775, avg=13286.33, stdev=6940.78 00:48:30.809 lat (usec): min=6305, max=55786, avg=13300.75, stdev=6940.74 00:48:30.809 clat percentiles (usec): 00:48:30.809 | 1.00th=[ 6849], 5.00th=[ 7635], 10.00th=[ 8029], 20.00th=[10814], 00:48:30.809 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12649], 60.00th=[12911], 00:48:30.809 | 70.00th=[13435], 80.00th=[14091], 90.00th=[15664], 95.00th=[16450], 00:48:30.809 | 99.00th=[53216], 99.50th=[53216], 99.90th=[55313], 99.95th=[55837], 00:48:30.809 | 99.99th=[55837] 00:48:30.809 bw ( KiB/s): min=21504, max=34560, per=33.02%, avg=28800.00, stdev=3764.83, samples=10 00:48:30.809 iops : min= 168, max= 270, avg=225.00, stdev=29.41, samples=10 00:48:30.809 lat (msec) : 10=17.46%, 20=79.88%, 50=0.27%, 100=2.39% 00:48:30.809 cpu : usr=93.08%, sys=5.50%, ctx=16, majf=0, minf=0 00:48:30.809 IO depths : 1=3.0%, 2=97.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:30.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:30.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:30.809 issued rwts: total=1128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:30.809 latency : target=0, window=0, percentile=100.00%, depth=3 00:48:30.809 filename0: (groupid=0, jobs=1): err= 0: pid=108129: Tue Oct 1 20:47:24 2024 00:48:30.809 read: IOPS=238, BW=29.9MiB/s (31.3MB/s)(149MiB/5004msec) 00:48:30.809 slat (nsec): min=5062, max=56744, avg=14450.07, stdev=4876.39 00:48:30.809 clat (usec): min=6399, max=53780, avg=12538.40, stdev=7628.23 00:48:30.809 lat (usec): min=6411, max=53788, avg=12552.85, stdev=7628.22 00:48:30.809 clat percentiles (usec): 00:48:30.809 | 1.00th=[ 6849], 5.00th=[ 7963], 10.00th=[ 8717], 20.00th=[10028], 00:48:30.809 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11076], 60.00th=[11338], 00:48:30.809 | 70.00th=[11731], 80.00th=[12649], 90.00th=[14222], 95.00th=[15270], 00:48:30.809 | 99.00th=[52691], 99.50th=[52691], 99.90th=[53216], 99.95th=[53740], 00:48:30.809 | 99.99th=[53740] 00:48:30.809 bw ( KiB/s): min=25344, max=34048, per=34.83%, avg=30378.67, stdev=3340.29, samples=9 00:48:30.809 iops : min= 198, max= 266, avg=237.33, stdev=26.10, samples=9 00:48:30.809 lat (msec) : 10=18.83%, 20=77.66%, 50=0.67%, 100=2.85% 00:48:30.809 cpu : usr=92.76%, sys=5.72%, ctx=12, majf=0, minf=0 00:48:30.809 IO depths : 1=2.8%, 2=97.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:30.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:30.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:30.809 issued rwts: total=1195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:30.809 latency : target=0, window=0, percentile=100.00%, depth=3 00:48:30.809 filename0: (groupid=0, jobs=1): err= 0: pid=108130: Tue Oct 1 20:47:24 2024 00:48:30.809 read: IOPS=217, BW=27.2MiB/s (28.5MB/s)(136MiB/5004msec) 00:48:30.809 slat (nsec): min=7932, max=64151, avg=13010.87, stdev=4920.29 00:48:30.809 clat (usec): min=4277, max=21671, avg=13784.29, stdev=3437.45 00:48:30.809 lat (usec): min=4285, max=21680, avg=13797.30, stdev=3437.30 00:48:30.809 clat percentiles (usec): 00:48:30.809 | 1.00th=[ 4359], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9896], 
00:48:30.809 | 30.00th=[12518], 40.00th=[14222], 50.00th=[14615], 60.00th=[15139], 00:48:30.809 | 70.00th=[15533], 80.00th=[15926], 90.00th=[17957], 95.00th=[19268], 00:48:30.809 | 99.00th=[20317], 99.50th=[20579], 99.90th=[21627], 99.95th=[21627], 00:48:30.809 | 99.99th=[21627] 00:48:30.809 bw ( KiB/s): min=23808, max=32512, per=31.47%, avg=27448.89, stdev=3445.98, samples=9 00:48:30.809 iops : min= 186, max= 254, avg=214.44, stdev=26.92, samples=9 00:48:30.809 lat (msec) : 10=20.98%, 20=77.28%, 50=1.75% 00:48:30.809 cpu : usr=92.94%, sys=5.72%, ctx=14, majf=0, minf=0 00:48:30.810 IO depths : 1=13.2%, 2=86.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:30.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:30.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:30.810 issued rwts: total=1087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:30.810 latency : target=0, window=0, percentile=100.00%, depth=3 00:48:30.810 00:48:30.810 Run status group 0 (all jobs): 00:48:30.810 READ: bw=85.2MiB/s (89.3MB/s), 27.2MiB/s-29.9MiB/s (28.5MB/s-31.3MB/s), io=426MiB (447MB), run=5004-5004msec 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null0 64 512 --md-size 16 --dif-type 2 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:30.810 bdev_null0 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:30.810 [2024-10-01 20:47:25.139035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:30.810 bdev_null1 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:30.810 bdev_null2 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:30.810 { 00:48:30.810 "params": { 00:48:30.810 "name": "Nvme$subsystem", 00:48:30.810 "trtype": "$TEST_TRANSPORT", 00:48:30.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:30.810 "adrfam": "ipv4", 00:48:30.810 "trsvcid": "$NVMF_PORT", 00:48:30.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:30.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:30.810 "hdgst": ${hdgst:-false}, 00:48:30.810 "ddgst": ${ddgst:-false} 00:48:30.810 }, 00:48:30.810 "method": "bdev_nvme_attach_controller" 00:48:30.810 } 00:48:30.810 EOF 00:48:30.810 )") 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:30.810 { 00:48:30.810 "params": { 00:48:30.810 "name": "Nvme$subsystem", 00:48:30.810 "trtype": "$TEST_TRANSPORT", 00:48:30.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:30.810 "adrfam": "ipv4", 00:48:30.810 "trsvcid": "$NVMF_PORT", 00:48:30.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:30.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:30.810 "hdgst": ${hdgst:-false}, 00:48:30.810 "ddgst": ${ddgst:-false} 00:48:30.810 }, 00:48:30.810 "method": "bdev_nvme_attach_controller" 00:48:30.810 } 00:48:30.810 EOF 00:48:30.810 )") 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@73 -- # cat 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:30.810 { 00:48:30.810 "params": { 00:48:30.810 "name": "Nvme$subsystem", 00:48:30.810 "trtype": "$TEST_TRANSPORT", 00:48:30.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:30.810 "adrfam": "ipv4", 00:48:30.810 "trsvcid": "$NVMF_PORT", 00:48:30.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:30.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:30.810 "hdgst": ${hdgst:-false}, 00:48:30.810 "ddgst": ${ddgst:-false} 00:48:30.810 }, 00:48:30.810 "method": "bdev_nvme_attach_controller" 00:48:30.810 } 00:48:30.810 EOF 00:48:30.810 )") 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:48:30.810 "params": { 00:48:30.810 "name": "Nvme0", 00:48:30.810 "trtype": "tcp", 00:48:30.810 "traddr": "10.0.0.3", 00:48:30.810 "adrfam": "ipv4", 00:48:30.810 "trsvcid": "4420", 00:48:30.810 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:30.810 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:30.810 "hdgst": false, 00:48:30.810 "ddgst": false 00:48:30.810 }, 00:48:30.810 "method": "bdev_nvme_attach_controller" 00:48:30.810 },{ 00:48:30.810 "params": { 00:48:30.810 "name": "Nvme1", 00:48:30.810 "trtype": "tcp", 00:48:30.810 "traddr": "10.0.0.3", 00:48:30.810 "adrfam": "ipv4", 00:48:30.810 "trsvcid": "4420", 00:48:30.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:30.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:30.810 "hdgst": false, 00:48:30.810 "ddgst": false 00:48:30.810 }, 00:48:30.810 "method": "bdev_nvme_attach_controller" 00:48:30.810 },{ 00:48:30.810 "params": { 00:48:30.810 "name": "Nvme2", 00:48:30.810 "trtype": "tcp", 00:48:30.810 "traddr": "10.0.0.3", 00:48:30.810 "adrfam": "ipv4", 00:48:30.810 "trsvcid": "4420", 00:48:30.810 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:48:30.810 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:48:30.810 "hdgst": false, 00:48:30.810 "ddgst": false 00:48:30.810 }, 00:48:30.810 "method": "bdev_nvme_attach_controller" 00:48:30.810 }' 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:48:30.810 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:30.811 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:30.811 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:48:30.811 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:30.811 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:48:30.811 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:48:30.811 20:47:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:48:30.811 20:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:30.811 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:48:30.811 ... 00:48:30.811 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:48:30.811 ... 00:48:30.811 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:48:30.811 ... 00:48:30.811 fio-3.35 00:48:30.811 Starting 24 threads 00:48:43.007 00:48:43.008 filename0: (groupid=0, jobs=1): err= 0: pid=108220: Tue Oct 1 20:47:36 2024 00:48:43.008 read: IOPS=73, BW=294KiB/s (301kB/s)(2944KiB/10027msec) 00:48:43.008 slat (usec): min=4, max=4019, avg=18.17, stdev=147.79 00:48:43.008 clat (msec): min=42, max=468, avg=217.84, stdev=123.74 00:48:43.008 lat (msec): min=42, max=468, avg=217.86, stdev=123.73 00:48:43.008 clat percentiles (msec): 00:48:43.008 | 1.00th=[ 43], 5.00th=[ 71], 10.00th=[ 78], 20.00th=[ 86], 00:48:43.008 | 30.00th=[ 112], 40.00th=[ 123], 50.00th=[ 241], 60.00th=[ 266], 00:48:43.008 | 70.00th=[ 313], 80.00th=[ 330], 90.00th=[ 384], 95.00th=[ 435], 00:48:43.008 | 99.00th=[ 468], 99.50th=[ 468], 99.90th=[ 468], 99.95th=[ 468], 00:48:43.008 | 99.99th=[ 468] 00:48:43.008 bw ( KiB/s): min= 126, max= 718, per=3.73%, avg=287.80, stdev=192.55, samples=20 00:48:43.008 iops : min= 31, max= 179, avg=71.90, stdev=48.10, samples=20 00:48:43.008 lat (msec) : 50=3.40%, 100=19.16%, 250=32.07%, 500=45.38% 00:48:43.008 cpu : usr=31.94%, sys=1.17%, ctx=1024, majf=0, minf=9 00:48:43.008 IO depths : 1=3.8%, 2=8.3%, 4=20.0%, 8=59.1%, 16=8.8%, 32=0.0%, >=64=0.0% 00:48:43.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.008 complete : 0=0.0%, 4=92.8%, 8=1.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.008 issued rwts: total=736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.008 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.008 filename0: (groupid=0, jobs=1): err= 0: pid=108221: Tue Oct 1 20:47:36 2024 00:48:43.008 read: IOPS=84, BW=338KiB/s (347kB/s)(3392KiB/10023msec) 00:48:43.008 slat (usec): min=4, max=8057, avg=36.18, stdev=413.45 00:48:43.008 clat (msec): min=18, max=571, avg=188.65, stdev=118.21 00:48:43.008 lat (msec): min=18, max=571, avg=188.69, stdev=118.24 00:48:43.008 clat percentiles (msec): 00:48:43.008 | 1.00th=[ 19], 5.00th=[ 54], 10.00th=[ 58], 20.00th=[ 72], 00:48:43.008 | 30.00th=[ 92], 40.00th=[ 117], 50.00th=[ 192], 60.00th=[ 226], 00:48:43.008 | 70.00th=[ 245], 80.00th=[ 300], 90.00th=[ 368], 95.00th=[ 409], 00:48:43.008 | 99.00th=[ 439], 99.50th=[ 468], 99.90th=[ 575], 99.95th=[ 575], 00:48:43.008 | 99.99th=[ 575] 00:48:43.008 bw ( KiB/s): min= 128, max= 896, per=4.37%, avg=336.80, stdev=244.50, samples=20 00:48:43.008 iops : min= 32, max= 224, avg=84.20, stdev=61.13, samples=20 00:48:43.008 lat (msec) : 20=1.89%, 50=0.71%, 100=33.49%, 250=35.61%, 500=27.83% 00:48:43.008 lat (msec) : 750=0.47% 00:48:43.008 cpu : usr=35.10%, sys=1.36%, ctx=970, majf=0, minf=9 00:48:43.008 IO depths : 1=2.4%, 2=6.1%, 4=16.7%, 8=64.3%, 16=10.5%, 32=0.0%, >=64=0.0% 00:48:43.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.008 complete : 0=0.0%, 4=92.0%, 
8=2.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.008 issued rwts: total=848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.008 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.008 filename0: (groupid=0, jobs=1): err= 0: pid=108222: Tue Oct 1 20:47:36 2024 00:48:43.008 read: IOPS=93, BW=374KiB/s (383kB/s)(3752KiB/10028msec) 00:48:43.008 slat (usec): min=4, max=533, avg=14.82, stdev=19.50 00:48:43.008 clat (msec): min=32, max=427, avg=170.79, stdev=105.37 00:48:43.008 lat (msec): min=32, max=427, avg=170.80, stdev=105.37 00:48:43.008 clat percentiles (msec): 00:48:43.008 | 1.00th=[ 33], 5.00th=[ 52], 10.00th=[ 57], 20.00th=[ 71], 00:48:43.008 | 30.00th=[ 75], 40.00th=[ 96], 50.00th=[ 174], 60.00th=[ 215], 00:48:43.008 | 70.00th=[ 239], 80.00th=[ 251], 90.00th=[ 317], 95.00th=[ 368], 00:48:43.008 | 99.00th=[ 426], 99.50th=[ 426], 99.90th=[ 426], 99.95th=[ 426], 00:48:43.008 | 99.99th=[ 426] 00:48:43.008 bw ( KiB/s): min= 128, max= 960, per=4.83%, avg=371.20, stdev=253.44, samples=20 00:48:43.008 iops : min= 32, max= 240, avg=92.80, stdev=63.36, samples=20 00:48:43.008 lat (msec) : 50=4.48%, 100=38.59%, 250=33.80%, 500=23.13% 00:48:43.008 cpu : usr=36.37%, sys=1.17%, ctx=1052, majf=0, minf=9 00:48:43.008 IO depths : 1=1.2%, 2=2.8%, 4=10.2%, 8=73.8%, 16=12.0%, 32=0.0%, >=64=0.0% 00:48:43.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.008 complete : 0=0.0%, 4=90.0%, 8=5.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.008 issued rwts: total=938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.008 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.008 filename0: (groupid=0, jobs=1): err= 0: pid=108223: Tue Oct 1 20:47:36 2024 00:48:43.008 read: IOPS=76, BW=306KiB/s (314kB/s)(3072KiB/10027msec) 00:48:43.008 slat (usec): min=4, max=4047, avg=19.68, stdev=145.80 00:48:43.008 clat (msec): min=44, max=431, avg=208.55, stdev=100.96 00:48:43.008 lat (msec): min=44, max=431, avg=208.57, stdev=100.98 00:48:43.008 clat percentiles (msec): 00:48:43.008 | 1.00th=[ 46], 5.00th=[ 71], 10.00th=[ 79], 20.00th=[ 103], 00:48:43.008 | 30.00th=[ 132], 40.00th=[ 163], 50.00th=[ 211], 60.00th=[ 232], 00:48:43.008 | 70.00th=[ 275], 80.00th=[ 309], 90.00th=[ 347], 95.00th=[ 372], 00:48:43.008 | 99.00th=[ 430], 99.50th=[ 430], 99.90th=[ 430], 99.95th=[ 430], 00:48:43.008 | 99.99th=[ 430] 00:48:43.008 bw ( KiB/s): min= 126, max= 758, per=3.93%, avg=302.90, stdev=178.95, samples=20 00:48:43.008 iops : min= 31, max= 189, avg=75.65, stdev=44.72, samples=20 00:48:43.008 lat (msec) : 50=2.08%, 100=17.06%, 250=47.53%, 500=33.33% 00:48:43.008 cpu : usr=40.40%, sys=1.36%, ctx=1202, majf=0, minf=9 00:48:43.008 IO depths : 1=3.8%, 2=7.6%, 4=16.9%, 8=62.4%, 16=9.4%, 32=0.0%, >=64=0.0% 00:48:43.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.008 complete : 0=0.0%, 4=91.9%, 8=3.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.008 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.008 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.008 filename0: (groupid=0, jobs=1): err= 0: pid=108224: Tue Oct 1 20:47:36 2024 00:48:43.008 read: IOPS=88, BW=355KiB/s (363kB/s)(3560KiB/10037msec) 00:48:43.008 slat (nsec): min=4805, max=57551, avg=11546.60, stdev=6601.62 00:48:43.008 clat (msec): min=34, max=421, avg=180.00, stdev=100.52 00:48:43.008 lat (msec): min=34, max=421, avg=180.01, stdev=100.52 00:48:43.008 clat percentiles (msec): 00:48:43.008 | 1.00th=[ 36], 5.00th=[ 58], 10.00th=[ 70], 20.00th=[ 
75], 00:48:43.008 | 30.00th=[ 92], 40.00th=[ 116], 50.00th=[ 182], 60.00th=[ 226], 00:48:43.008 | 70.00th=[ 241], 80.00th=[ 264], 90.00th=[ 317], 95.00th=[ 338], 00:48:43.008 | 99.00th=[ 422], 99.50th=[ 422], 99.90th=[ 422], 99.95th=[ 422], 00:48:43.008 | 99.99th=[ 422] 00:48:43.008 bw ( KiB/s): min= 176, max= 896, per=4.59%, avg=353.45, stdev=226.47, samples=20 00:48:43.008 iops : min= 44, max= 224, avg=88.30, stdev=56.58, samples=20 00:48:43.008 lat (msec) : 50=3.37%, 100=33.03%, 250=36.85%, 500=26.74% 00:48:43.008 cpu : usr=32.90%, sys=1.30%, ctx=887, majf=0, minf=9 00:48:43.008 IO depths : 1=1.0%, 2=2.1%, 4=8.1%, 8=75.8%, 16=12.9%, 32=0.0%, >=64=0.0% 00:48:43.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.008 complete : 0=0.0%, 4=89.7%, 8=6.2%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.008 issued rwts: total=890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.008 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.008 filename0: (groupid=0, jobs=1): err= 0: pid=108225: Tue Oct 1 20:47:36 2024 00:48:43.008 read: IOPS=70, BW=283KiB/s (290kB/s)(2840KiB/10019msec) 00:48:43.008 slat (usec): min=4, max=2050, avg=17.71, stdev=76.97 00:48:43.008 clat (msec): min=26, max=492, avg=225.50, stdev=125.20 00:48:43.008 lat (msec): min=26, max=492, avg=225.51, stdev=125.21 00:48:43.008 clat percentiles (msec): 00:48:43.008 | 1.00th=[ 32], 5.00th=[ 69], 10.00th=[ 79], 20.00th=[ 104], 00:48:43.008 | 30.00th=[ 123], 40.00th=[ 153], 50.00th=[ 232], 60.00th=[ 271], 00:48:43.008 | 70.00th=[ 313], 80.00th=[ 347], 90.00th=[ 384], 95.00th=[ 456], 00:48:43.008 | 99.00th=[ 481], 99.50th=[ 493], 99.90th=[ 493], 99.95th=[ 493], 00:48:43.008 | 99.99th=[ 493] 00:48:43.008 bw ( KiB/s): min= 127, max= 656, per=3.36%, avg=258.42, stdev=153.12, samples=19 00:48:43.008 iops : min= 31, max= 164, avg=64.53, stdev=38.35, samples=19 00:48:43.008 lat (msec) : 50=2.68%, 100=13.24%, 250=39.30%, 500=44.79% 00:48:43.008 cpu : usr=38.69%, sys=1.23%, ctx=1246, majf=0, minf=9 00:48:43.008 IO depths : 1=3.9%, 2=9.0%, 4=21.5%, 8=56.8%, 16=8.7%, 32=0.0%, >=64=0.0% 00:48:43.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.008 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.008 issued rwts: total=710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.008 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.008 filename0: (groupid=0, jobs=1): err= 0: pid=108226: Tue Oct 1 20:47:36 2024 00:48:43.008 read: IOPS=70, BW=281KiB/s (288kB/s)(2816KiB/10007msec) 00:48:43.008 slat (usec): min=4, max=8018, avg=35.27, stdev=337.16 00:48:43.008 clat (msec): min=8, max=492, avg=227.10, stdev=137.15 00:48:43.008 lat (msec): min=8, max=492, avg=227.14, stdev=137.14 00:48:43.008 clat percentiles (msec): 00:48:43.008 | 1.00th=[ 9], 5.00th=[ 48], 10.00th=[ 70], 20.00th=[ 99], 00:48:43.008 | 30.00th=[ 117], 40.00th=[ 127], 50.00th=[ 228], 60.00th=[ 305], 00:48:43.008 | 70.00th=[ 321], 80.00th=[ 347], 90.00th=[ 422], 95.00th=[ 439], 00:48:43.008 | 99.00th=[ 493], 99.50th=[ 493], 99.90th=[ 493], 99.95th=[ 493], 00:48:43.008 | 99.99th=[ 493] 00:48:43.008 bw ( KiB/s): min= 128, max= 768, per=3.24%, avg=249.26, stdev=172.02, samples=19 00:48:43.008 iops : min= 32, max= 192, avg=62.32, stdev=43.01, samples=19 00:48:43.009 lat (msec) : 10=2.56%, 20=1.99%, 50=1.28%, 100=14.35%, 250=32.95% 00:48:43.009 lat (msec) : 500=46.88% 00:48:43.009 cpu : usr=36.36%, sys=1.05%, ctx=1057, majf=0, minf=9 00:48:43.009 IO depths : 1=3.8%, 2=9.1%, 
4=21.9%, 8=56.5%, 16=8.7%, 32=0.0%, >=64=0.0% 00:48:43.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.009 complete : 0=0.0%, 4=93.3%, 8=0.9%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.009 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.009 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.009 filename0: (groupid=0, jobs=1): err= 0: pid=108227: Tue Oct 1 20:47:36 2024 00:48:43.009 read: IOPS=70, BW=281KiB/s (288kB/s)(2816KiB/10004msec) 00:48:43.009 slat (usec): min=4, max=4065, avg=23.11, stdev=152.97 00:48:43.009 clat (msec): min=7, max=491, avg=227.15, stdev=143.14 00:48:43.009 lat (msec): min=7, max=491, avg=227.18, stdev=143.15 00:48:43.009 clat percentiles (msec): 00:48:43.009 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 70], 20.00th=[ 87], 00:48:43.009 | 30.00th=[ 111], 40.00th=[ 118], 50.00th=[ 167], 60.00th=[ 313], 00:48:43.009 | 70.00th=[ 347], 80.00th=[ 355], 90.00th=[ 430], 95.00th=[ 435], 00:48:43.009 | 99.00th=[ 481], 99.50th=[ 493], 99.90th=[ 493], 99.95th=[ 493], 00:48:43.009 | 99.99th=[ 493] 00:48:43.009 bw ( KiB/s): min= 128, max= 768, per=3.15%, avg=242.53, stdev=174.09, samples=19 00:48:43.009 iops : min= 32, max= 192, avg=60.63, stdev=43.52, samples=19 00:48:43.009 lat (msec) : 10=4.55%, 50=2.27%, 100=19.18%, 250=25.57%, 500=48.44% 00:48:43.009 cpu : usr=41.96%, sys=1.20%, ctx=1393, majf=0, minf=9 00:48:43.009 IO depths : 1=4.5%, 2=9.2%, 4=20.0%, 8=58.2%, 16=8.0%, 32=0.0%, >=64=0.0% 00:48:43.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.009 complete : 0=0.0%, 4=92.8%, 8=1.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.009 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.009 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.009 filename1: (groupid=0, jobs=1): err= 0: pid=108228: Tue Oct 1 20:47:36 2024 00:48:43.009 read: IOPS=101, BW=406KiB/s (416kB/s)(4072KiB/10034msec) 00:48:43.009 slat (nsec): min=4867, max=83441, avg=13330.62, stdev=8591.40 00:48:43.009 clat (msec): min=6, max=383, avg=157.41, stdev=102.67 00:48:43.009 lat (msec): min=6, max=383, avg=157.42, stdev=102.67 00:48:43.009 clat percentiles (msec): 00:48:43.009 | 1.00th=[ 7], 5.00th=[ 35], 10.00th=[ 48], 20.00th=[ 64], 00:48:43.009 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 120], 60.00th=[ 213], 00:48:43.009 | 70.00th=[ 239], 80.00th=[ 255], 90.00th=[ 296], 95.00th=[ 321], 00:48:43.009 | 99.00th=[ 384], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 384], 00:48:43.009 | 99.99th=[ 384] 00:48:43.009 bw ( KiB/s): min= 128, max= 1111, per=5.23%, avg=402.70, stdev=283.74, samples=20 00:48:43.009 iops : min= 32, max= 277, avg=100.60, stdev=70.84, samples=20 00:48:43.009 lat (msec) : 10=2.65%, 20=2.06%, 50=8.64%, 100=33.40%, 250=29.08% 00:48:43.009 lat (msec) : 500=24.17% 00:48:43.009 cpu : usr=37.14%, sys=1.05%, ctx=1057, majf=0, minf=9 00:48:43.009 IO depths : 1=1.8%, 2=3.5%, 4=11.1%, 8=72.5%, 16=11.1%, 32=0.0%, >=64=0.0% 00:48:43.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.009 complete : 0=0.0%, 4=90.1%, 8=4.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.009 issued rwts: total=1018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.009 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.009 filename1: (groupid=0, jobs=1): err= 0: pid=108229: Tue Oct 1 20:47:36 2024 00:48:43.009 read: IOPS=70, BW=281KiB/s (288kB/s)(2816KiB/10005msec) 00:48:43.009 slat (usec): min=5, max=8027, avg=22.83, stdev=302.17 00:48:43.009 
clat (msec): min=10, max=472, avg=227.23, stdev=132.70 00:48:43.009 lat (msec): min=10, max=472, avg=227.25, stdev=132.69 00:48:43.009 clat percentiles (msec): 00:48:43.009 | 1.00th=[ 11], 5.00th=[ 64], 10.00th=[ 73], 20.00th=[ 101], 00:48:43.009 | 30.00th=[ 113], 40.00th=[ 129], 50.00th=[ 239], 60.00th=[ 309], 00:48:43.009 | 70.00th=[ 338], 80.00th=[ 368], 90.00th=[ 384], 95.00th=[ 430], 00:48:43.009 | 99.00th=[ 472], 99.50th=[ 472], 99.90th=[ 472], 99.95th=[ 472], 00:48:43.009 | 99.99th=[ 472] 00:48:43.009 bw ( KiB/s): min= 128, max= 640, per=3.33%, avg=256.00, stdev=164.04, samples=19 00:48:43.009 iops : min= 32, max= 160, avg=64.00, stdev=41.01, samples=19 00:48:43.009 lat (msec) : 20=2.27%, 50=0.99%, 100=16.48%, 250=32.81%, 500=47.44% 00:48:43.009 cpu : usr=40.80%, sys=1.35%, ctx=1224, majf=0, minf=9 00:48:43.009 IO depths : 1=5.0%, 2=10.5%, 4=22.7%, 8=54.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:48:43.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.009 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.009 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.009 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.009 filename1: (groupid=0, jobs=1): err= 0: pid=108230: Tue Oct 1 20:47:36 2024 00:48:43.009 read: IOPS=95, BW=380KiB/s (389kB/s)(3824KiB/10055msec) 00:48:43.009 slat (usec): min=4, max=4059, avg=19.72, stdev=131.28 00:48:43.009 clat (msec): min=8, max=561, avg=168.11, stdev=105.35 00:48:43.009 lat (msec): min=8, max=561, avg=168.13, stdev=105.36 00:48:43.009 clat percentiles (msec): 00:48:43.009 | 1.00th=[ 9], 5.00th=[ 45], 10.00th=[ 51], 20.00th=[ 73], 00:48:43.009 | 30.00th=[ 81], 40.00th=[ 99], 50.00th=[ 174], 60.00th=[ 211], 00:48:43.009 | 70.00th=[ 232], 80.00th=[ 257], 90.00th=[ 305], 95.00th=[ 355], 00:48:43.009 | 99.00th=[ 468], 99.50th=[ 468], 99.90th=[ 558], 99.95th=[ 558], 00:48:43.009 | 99.99th=[ 558] 00:48:43.009 bw ( KiB/s): min= 128, max= 1024, per=4.88%, avg=375.85, stdev=262.61, samples=20 00:48:43.009 iops : min= 32, max= 256, avg=93.95, stdev=65.66, samples=20 00:48:43.009 lat (msec) : 10=1.67%, 50=7.32%, 100=32.64%, 250=36.51%, 500=21.76% 00:48:43.009 lat (msec) : 750=0.10% 00:48:43.009 cpu : usr=43.04%, sys=1.53%, ctx=1361, majf=0, minf=9 00:48:43.009 IO depths : 1=1.9%, 2=5.9%, 4=18.8%, 8=62.8%, 16=10.7%, 32=0.0%, >=64=0.0% 00:48:43.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.009 complete : 0=0.0%, 4=92.2%, 8=2.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.009 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.009 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.009 filename1: (groupid=0, jobs=1): err= 0: pid=108231: Tue Oct 1 20:47:36 2024 00:48:43.009 read: IOPS=74, BW=299KiB/s (306kB/s)(2992KiB/10013msec) 00:48:43.009 slat (usec): min=4, max=8023, avg=23.92, stdev=292.95 00:48:43.009 clat (msec): min=26, max=432, avg=214.02, stdev=130.82 00:48:43.009 lat (msec): min=26, max=432, avg=214.04, stdev=130.81 00:48:43.009 clat percentiles (msec): 00:48:43.009 | 1.00th=[ 42], 5.00th=[ 58], 10.00th=[ 71], 20.00th=[ 84], 00:48:43.009 | 30.00th=[ 108], 40.00th=[ 121], 50.00th=[ 159], 60.00th=[ 305], 00:48:43.009 | 70.00th=[ 321], 80.00th=[ 326], 90.00th=[ 397], 95.00th=[ 430], 00:48:43.009 | 99.00th=[ 435], 99.50th=[ 435], 99.90th=[ 435], 99.95th=[ 435], 00:48:43.009 | 99.99th=[ 435] 00:48:43.009 bw ( KiB/s): min= 127, max= 832, per=3.56%, avg=274.42, stdev=200.79, samples=19 00:48:43.009 
iops : min= 31, max= 208, avg=68.53, stdev=50.26, samples=19 00:48:43.009 lat (msec) : 50=4.28%, 100=21.79%, 250=27.14%, 500=46.79% 00:48:43.009 cpu : usr=32.43%, sys=1.17%, ctx=865, majf=0, minf=9 00:48:43.009 IO depths : 1=3.6%, 2=7.6%, 4=17.9%, 8=61.5%, 16=9.4%, 32=0.0%, >=64=0.0% 00:48:43.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.009 complete : 0=0.0%, 4=92.3%, 8=2.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.009 issued rwts: total=748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.009 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.009 filename1: (groupid=0, jobs=1): err= 0: pid=108232: Tue Oct 1 20:47:36 2024 00:48:43.009 read: IOPS=73, BW=294KiB/s (301kB/s)(2944KiB/10001msec) 00:48:43.009 slat (usec): min=4, max=8054, avg=26.41, stdev=296.48 00:48:43.009 clat (msec): min=3, max=587, avg=217.25, stdev=155.79 00:48:43.009 lat (msec): min=3, max=587, avg=217.28, stdev=155.80 00:48:43.009 clat percentiles (msec): 00:48:43.009 | 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 11], 20.00th=[ 83], 00:48:43.009 | 30.00th=[ 105], 40.00th=[ 117], 50.00th=[ 126], 60.00th=[ 313], 00:48:43.009 | 70.00th=[ 342], 80.00th=[ 372], 90.00th=[ 414], 95.00th=[ 435], 00:48:43.009 | 99.00th=[ 592], 99.50th=[ 592], 99.90th=[ 592], 99.95th=[ 592], 00:48:43.009 | 99.99th=[ 592] 00:48:43.009 bw ( KiB/s): min= 128, max= 640, per=3.33%, avg=256.00, stdev=167.36, samples=18 00:48:43.009 iops : min= 32, max= 160, avg=64.00, stdev=41.84, samples=18 00:48:43.009 lat (msec) : 4=6.52%, 10=2.17%, 20=2.17%, 50=0.68%, 100=16.44% 00:48:43.009 lat (msec) : 250=27.45%, 500=42.39%, 750=2.17% 00:48:43.009 cpu : usr=39.47%, sys=1.25%, ctx=1076, majf=0, minf=9 00:48:43.009 IO depths : 1=5.0%, 2=10.3%, 4=22.1%, 8=55.0%, 16=7.5%, 32=0.0%, >=64=0.0% 00:48:43.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.009 complete : 0=0.0%, 4=93.3%, 8=0.9%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.009 issued rwts: total=736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.009 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.009 filename1: (groupid=0, jobs=1): err= 0: pid=108233: Tue Oct 1 20:47:36 2024 00:48:43.009 read: IOPS=71, BW=288KiB/s (295kB/s)(2880KiB/10007msec) 00:48:43.009 slat (nsec): min=4713, max=76147, avg=20385.29, stdev=13178.81 00:48:43.010 clat (msec): min=26, max=471, avg=222.22, stdev=124.60 00:48:43.010 lat (msec): min=26, max=471, avg=222.24, stdev=124.61 00:48:43.010 clat percentiles (msec): 00:48:43.010 | 1.00th=[ 42], 5.00th=[ 70], 10.00th=[ 72], 20.00th=[ 96], 00:48:43.010 | 30.00th=[ 112], 40.00th=[ 153], 50.00th=[ 207], 60.00th=[ 253], 00:48:43.010 | 70.00th=[ 309], 80.00th=[ 347], 90.00th=[ 388], 95.00th=[ 418], 00:48:43.010 | 99.00th=[ 472], 99.50th=[ 472], 99.90th=[ 472], 99.95th=[ 472], 00:48:43.010 | 99.99th=[ 472] 00:48:43.010 bw ( KiB/s): min= 128, max= 768, per=3.50%, avg=269.47, stdev=176.42, samples=19 00:48:43.010 iops : min= 32, max= 192, avg=67.37, stdev=44.11, samples=19 00:48:43.010 lat (msec) : 50=2.78%, 100=19.44%, 250=37.64%, 500=40.14% 00:48:43.010 cpu : usr=37.44%, sys=1.19%, ctx=1108, majf=0, minf=9 00:48:43.010 IO depths : 1=3.1%, 2=6.4%, 4=16.1%, 8=64.4%, 16=10.0%, 32=0.0%, >=64=0.0% 00:48:43.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.010 complete : 0=0.0%, 4=91.8%, 8=2.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.010 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.010 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:48:43.010 filename1: (groupid=0, jobs=1): err= 0: pid=108234: Tue Oct 1 20:47:36 2024 00:48:43.010 read: IOPS=68, BW=273KiB/s (279kB/s)(2728KiB/10010msec) 00:48:43.010 slat (nsec): min=4903, max=42399, avg=11996.10, stdev=5326.93 00:48:43.010 clat (msec): min=20, max=527, avg=234.64, stdev=137.07 00:48:43.010 lat (msec): min=20, max=527, avg=234.65, stdev=137.07 00:48:43.010 clat percentiles (msec): 00:48:43.010 | 1.00th=[ 21], 5.00th=[ 70], 10.00th=[ 74], 20.00th=[ 99], 00:48:43.010 | 30.00th=[ 108], 40.00th=[ 121], 50.00th=[ 296], 60.00th=[ 313], 00:48:43.010 | 70.00th=[ 338], 80.00th=[ 368], 90.00th=[ 397], 95.00th=[ 430], 00:48:43.010 | 99.00th=[ 527], 99.50th=[ 527], 99.90th=[ 527], 99.95th=[ 527], 00:48:43.010 | 99.99th=[ 527] 00:48:43.010 bw ( KiB/s): min= 128, max= 640, per=3.24%, avg=249.26, stdev=165.36, samples=19 00:48:43.010 iops : min= 32, max= 160, avg=62.32, stdev=41.34, samples=19 00:48:43.010 lat (msec) : 50=2.35%, 100=21.85%, 250=24.93%, 500=48.53%, 750=2.35% 00:48:43.010 cpu : usr=31.98%, sys=1.18%, ctx=1003, majf=0, minf=9 00:48:43.010 IO depths : 1=4.3%, 2=9.1%, 4=21.6%, 8=56.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:48:43.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.010 complete : 0=0.0%, 4=92.8%, 8=1.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.010 issued rwts: total=682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.010 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.010 filename1: (groupid=0, jobs=1): err= 0: pid=108235: Tue Oct 1 20:47:36 2024 00:48:43.010 read: IOPS=77, BW=310KiB/s (317kB/s)(3100KiB/10001msec) 00:48:43.010 slat (usec): min=4, max=8055, avg=31.28, stdev=323.06 00:48:43.010 clat (msec): min=2, max=589, avg=206.30, stdev=153.92 00:48:43.010 lat (msec): min=2, max=589, avg=206.34, stdev=153.94 00:48:43.010 clat percentiles (msec): 00:48:43.010 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 8], 20.00th=[ 71], 00:48:43.010 | 30.00th=[ 105], 40.00th=[ 115], 50.00th=[ 121], 60.00th=[ 300], 00:48:43.010 | 70.00th=[ 338], 80.00th=[ 363], 90.00th=[ 380], 95.00th=[ 443], 00:48:43.010 | 99.00th=[ 592], 99.50th=[ 592], 99.90th=[ 592], 99.95th=[ 592], 00:48:43.010 | 99.99th=[ 592] 00:48:43.010 bw ( KiB/s): min= 128, max= 816, per=3.42%, avg=263.56, stdev=191.58, samples=18 00:48:43.010 iops : min= 32, max= 204, avg=65.89, stdev=47.89, samples=18 00:48:43.010 lat (msec) : 4=8.26%, 10=4.13%, 50=2.06%, 100=15.23%, 250=26.97% 00:48:43.010 lat (msec) : 500=41.29%, 750=2.06% 00:48:43.010 cpu : usr=37.13%, sys=1.18%, ctx=1005, majf=0, minf=9 00:48:43.010 IO depths : 1=3.7%, 2=9.8%, 4=21.7%, 8=55.7%, 16=9.0%, 32=0.0%, >=64=0.0% 00:48:43.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.010 complete : 0=0.0%, 4=93.4%, 8=1.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.010 issued rwts: total=775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.010 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.010 filename2: (groupid=0, jobs=1): err= 0: pid=108236: Tue Oct 1 20:47:36 2024 00:48:43.010 read: IOPS=81, BW=325KiB/s (333kB/s)(3264KiB/10032msec) 00:48:43.010 slat (usec): min=4, max=4044, avg=18.67, stdev=141.35 00:48:43.010 clat (msec): min=35, max=390, avg=196.49, stdev=108.74 00:48:43.010 lat (msec): min=35, max=390, avg=196.51, stdev=108.74 00:48:43.010 clat percentiles (msec): 00:48:43.010 | 1.00th=[ 36], 5.00th=[ 55], 10.00th=[ 72], 20.00th=[ 84], 00:48:43.010 | 30.00th=[ 105], 40.00th=[ 121], 50.00th=[ 209], 60.00th=[ 243], 00:48:43.010 | 70.00th=[ 284], 
80.00th=[ 309], 90.00th=[ 330], 95.00th=[ 372], 00:48:43.010 | 99.00th=[ 393], 99.50th=[ 393], 99.90th=[ 393], 99.95th=[ 393], 00:48:43.010 | 99.99th=[ 393] 00:48:43.010 bw ( KiB/s): min= 128, max= 768, per=4.15%, avg=319.65, stdev=204.76, samples=20 00:48:43.010 iops : min= 32, max= 192, avg=79.90, stdev=51.16, samples=20 00:48:43.010 lat (msec) : 50=3.92%, 100=24.51%, 250=34.31%, 500=37.25% 00:48:43.010 cpu : usr=45.33%, sys=1.59%, ctx=1261, majf=0, minf=9 00:48:43.010 IO depths : 1=5.5%, 2=11.4%, 4=24.0%, 8=52.1%, 16=7.0%, 32=0.0%, >=64=0.0% 00:48:43.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.010 complete : 0=0.0%, 4=93.8%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.010 issued rwts: total=816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.010 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.010 filename2: (groupid=0, jobs=1): err= 0: pid=108237: Tue Oct 1 20:47:36 2024 00:48:43.010 read: IOPS=73, BW=293KiB/s (300kB/s)(2932KiB/10003msec) 00:48:43.010 slat (nsec): min=4758, max=63547, avg=15506.48, stdev=9453.74 00:48:43.010 clat (msec): min=56, max=438, avg=218.22, stdev=111.83 00:48:43.010 lat (msec): min=56, max=438, avg=218.23, stdev=111.84 00:48:43.010 clat percentiles (msec): 00:48:43.010 | 1.00th=[ 70], 5.00th=[ 71], 10.00th=[ 74], 20.00th=[ 104], 00:48:43.010 | 30.00th=[ 123], 40.00th=[ 157], 50.00th=[ 215], 60.00th=[ 264], 00:48:43.010 | 70.00th=[ 305], 80.00th=[ 326], 90.00th=[ 359], 95.00th=[ 430], 00:48:43.010 | 99.00th=[ 435], 99.50th=[ 435], 99.90th=[ 439], 99.95th=[ 439], 00:48:43.010 | 99.99th=[ 439] 00:48:43.010 bw ( KiB/s): min= 128, max= 880, per=3.56%, avg=274.95, stdev=185.68, samples=19 00:48:43.010 iops : min= 32, max= 220, avg=68.74, stdev=46.42, samples=19 00:48:43.010 lat (msec) : 100=17.33%, 250=42.02%, 500=40.65% 00:48:43.010 cpu : usr=39.03%, sys=1.29%, ctx=1203, majf=0, minf=9 00:48:43.010 IO depths : 1=3.3%, 2=7.4%, 4=19.1%, 8=60.8%, 16=9.4%, 32=0.0%, >=64=0.0% 00:48:43.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.010 complete : 0=0.0%, 4=92.5%, 8=1.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.010 issued rwts: total=733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.010 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.010 filename2: (groupid=0, jobs=1): err= 0: pid=108238: Tue Oct 1 20:47:36 2024 00:48:43.010 read: IOPS=83, BW=332KiB/s (340kB/s)(3336KiB/10040msec) 00:48:43.010 slat (nsec): min=4803, max=59792, avg=12298.86, stdev=6465.88 00:48:43.010 clat (msec): min=45, max=479, avg=192.05, stdev=95.44 00:48:43.010 lat (msec): min=45, max=479, avg=192.07, stdev=95.44 00:48:43.010 clat percentiles (msec): 00:48:43.010 | 1.00th=[ 46], 5.00th=[ 68], 10.00th=[ 73], 20.00th=[ 99], 00:48:43.010 | 30.00th=[ 120], 40.00th=[ 144], 50.00th=[ 205], 60.00th=[ 226], 00:48:43.010 | 70.00th=[ 241], 80.00th=[ 275], 90.00th=[ 313], 95.00th=[ 372], 00:48:43.010 | 99.00th=[ 468], 99.50th=[ 468], 99.90th=[ 481], 99.95th=[ 481], 00:48:43.010 | 99.99th=[ 481] 00:48:43.010 bw ( KiB/s): min= 208, max= 686, per=4.31%, avg=331.40, stdev=164.76, samples=20 00:48:43.010 iops : min= 52, max= 171, avg=82.80, stdev=41.15, samples=20 00:48:43.010 lat (msec) : 50=1.80%, 100=20.62%, 250=50.96%, 500=26.62% 00:48:43.010 cpu : usr=33.16%, sys=1.04%, ctx=885, majf=0, minf=9 00:48:43.010 IO depths : 1=1.6%, 2=3.2%, 4=11.0%, 8=72.3%, 16=11.9%, 32=0.0%, >=64=0.0% 00:48:43.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.010 complete 
: 0=0.0%, 4=90.0%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.010 issued rwts: total=834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.010 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.010 filename2: (groupid=0, jobs=1): err= 0: pid=108239: Tue Oct 1 20:47:36 2024 00:48:43.010 read: IOPS=77, BW=311KiB/s (319kB/s)(3124KiB/10040msec) 00:48:43.010 slat (usec): min=4, max=8037, avg=26.19, stdev=287.20 00:48:43.010 clat (msec): min=42, max=534, avg=205.19, stdev=115.19 00:48:43.010 lat (msec): min=42, max=534, avg=205.21, stdev=115.18 00:48:43.010 clat percentiles (msec): 00:48:43.010 | 1.00th=[ 43], 5.00th=[ 56], 10.00th=[ 72], 20.00th=[ 86], 00:48:43.010 | 30.00th=[ 109], 40.00th=[ 123], 50.00th=[ 205], 60.00th=[ 247], 00:48:43.010 | 70.00th=[ 292], 80.00th=[ 313], 90.00th=[ 355], 95.00th=[ 376], 00:48:43.010 | 99.00th=[ 464], 99.50th=[ 535], 99.90th=[ 535], 99.95th=[ 535], 00:48:43.010 | 99.99th=[ 535] 00:48:43.010 bw ( KiB/s): min= 128, max= 766, per=3.97%, avg=305.75, stdev=198.50, samples=20 00:48:43.010 iops : min= 32, max= 191, avg=76.40, stdev=49.57, samples=20 00:48:43.010 lat (msec) : 50=3.33%, 100=22.41%, 250=37.77%, 500=35.85%, 750=0.64% 00:48:43.010 cpu : usr=35.38%, sys=1.28%, ctx=1110, majf=0, minf=9 00:48:43.010 IO depths : 1=3.6%, 2=7.8%, 4=19.2%, 8=60.4%, 16=9.0%, 32=0.0%, >=64=0.0% 00:48:43.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.010 complete : 0=0.0%, 4=92.3%, 8=2.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.010 issued rwts: total=781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.011 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.011 filename2: (groupid=0, jobs=1): err= 0: pid=108240: Tue Oct 1 20:47:36 2024 00:48:43.011 read: IOPS=89, BW=357KiB/s (366kB/s)(3596KiB/10064msec) 00:48:43.011 slat (usec): min=4, max=4032, avg=17.55, stdev=134.40 00:48:43.011 clat (msec): min=8, max=443, avg=178.73, stdev=110.16 00:48:43.011 lat (msec): min=8, max=443, avg=178.75, stdev=110.15 00:48:43.011 clat percentiles (msec): 00:48:43.011 | 1.00th=[ 9], 5.00th=[ 14], 10.00th=[ 47], 20.00th=[ 68], 00:48:43.011 | 30.00th=[ 96], 40.00th=[ 121], 50.00th=[ 182], 60.00th=[ 220], 00:48:43.011 | 70.00th=[ 251], 80.00th=[ 300], 90.00th=[ 334], 95.00th=[ 338], 00:48:43.011 | 99.00th=[ 384], 99.50th=[ 384], 99.90th=[ 443], 99.95th=[ 443], 00:48:43.011 | 99.99th=[ 443] 00:48:43.011 bw ( KiB/s): min= 128, max= 1015, per=4.58%, avg=352.40, stdev=239.06, samples=20 00:48:43.011 iops : min= 32, max= 253, avg=88.00, stdev=59.60, samples=20 00:48:43.011 lat (msec) : 10=1.78%, 20=5.34%, 50=4.89%, 100=21.47%, 250=35.48% 00:48:43.011 lat (msec) : 500=31.03% 00:48:43.011 cpu : usr=34.90%, sys=1.24%, ctx=1193, majf=0, minf=9 00:48:43.011 IO depths : 1=2.8%, 2=6.3%, 4=15.6%, 8=64.5%, 16=10.8%, 32=0.0%, >=64=0.0% 00:48:43.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.011 complete : 0=0.0%, 4=91.9%, 8=3.5%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.011 issued rwts: total=899,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.011 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.011 filename2: (groupid=0, jobs=1): err= 0: pid=108241: Tue Oct 1 20:47:36 2024 00:48:43.011 read: IOPS=95, BW=383KiB/s (392kB/s)(3848KiB/10046msec) 00:48:43.011 slat (nsec): min=4761, max=62664, avg=12945.04, stdev=8107.08 00:48:43.011 clat (msec): min=7, max=442, avg=166.79, stdev=109.90 00:48:43.011 lat (msec): min=7, max=442, avg=166.81, stdev=109.90 00:48:43.011 clat percentiles 
(msec): 00:48:43.011 | 1.00th=[ 9], 5.00th=[ 13], 10.00th=[ 59], 20.00th=[ 71], 00:48:43.011 | 30.00th=[ 78], 40.00th=[ 89], 50.00th=[ 112], 60.00th=[ 226], 00:48:43.011 | 70.00th=[ 241], 80.00th=[ 275], 90.00th=[ 313], 95.00th=[ 334], 00:48:43.011 | 99.00th=[ 430], 99.50th=[ 435], 99.90th=[ 443], 99.95th=[ 443], 00:48:43.011 | 99.99th=[ 443] 00:48:43.011 bw ( KiB/s): min= 128, max= 841, per=4.93%, avg=380.00, stdev=254.44, samples=20 00:48:43.011 iops : min= 32, max= 210, avg=94.95, stdev=63.58, samples=20 00:48:43.011 lat (msec) : 10=1.66%, 20=4.99%, 50=1.66%, 100=37.01%, 250=26.40% 00:48:43.011 lat (msec) : 500=28.27% 00:48:43.011 cpu : usr=32.19%, sys=1.09%, ctx=1022, majf=0, minf=9 00:48:43.011 IO depths : 1=1.7%, 2=4.0%, 4=12.3%, 8=70.6%, 16=11.5%, 32=0.0%, >=64=0.0% 00:48:43.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.011 complete : 0=0.0%, 4=90.7%, 8=4.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.011 issued rwts: total=962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.011 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.011 filename2: (groupid=0, jobs=1): err= 0: pid=108242: Tue Oct 1 20:47:36 2024 00:48:43.011 read: IOPS=76, BW=305KiB/s (312kB/s)(3060KiB/10033msec) 00:48:43.011 slat (usec): min=5, max=4030, avg=26.02, stdev=201.15 00:48:43.011 clat (msec): min=43, max=415, avg=209.61, stdev=105.34 00:48:43.011 lat (msec): min=43, max=415, avg=209.64, stdev=105.34 00:48:43.011 clat percentiles (msec): 00:48:43.011 | 1.00th=[ 44], 5.00th=[ 71], 10.00th=[ 81], 20.00th=[ 107], 00:48:43.011 | 30.00th=[ 117], 40.00th=[ 142], 50.00th=[ 215], 60.00th=[ 239], 00:48:43.011 | 70.00th=[ 271], 80.00th=[ 321], 90.00th=[ 368], 95.00th=[ 380], 00:48:43.011 | 99.00th=[ 393], 99.50th=[ 397], 99.90th=[ 418], 99.95th=[ 418], 00:48:43.011 | 99.99th=[ 418] 00:48:43.011 bw ( KiB/s): min= 128, max= 640, per=3.89%, avg=299.40, stdev=167.90, samples=20 00:48:43.011 iops : min= 32, max= 160, avg=74.80, stdev=41.92, samples=20 00:48:43.011 lat (msec) : 50=3.27%, 100=11.11%, 250=50.98%, 500=34.64% 00:48:43.011 cpu : usr=39.90%, sys=1.29%, ctx=1197, majf=0, minf=9 00:48:43.011 IO depths : 1=3.4%, 2=8.1%, 4=20.0%, 8=59.2%, 16=9.3%, 32=0.0%, >=64=0.0% 00:48:43.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.011 complete : 0=0.0%, 4=92.8%, 8=1.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.011 issued rwts: total=765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.011 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.011 filename2: (groupid=0, jobs=1): err= 0: pid=108243: Tue Oct 1 20:47:36 2024 00:48:43.011 read: IOPS=91, BW=365KiB/s (374kB/s)(3664KiB/10043msec) 00:48:43.011 slat (usec): min=5, max=5048, avg=34.05, stdev=250.42 00:48:43.011 clat (msec): min=32, max=462, avg=175.10, stdev=94.94 00:48:43.011 lat (msec): min=32, max=462, avg=175.13, stdev=94.95 00:48:43.011 clat percentiles (msec): 00:48:43.011 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 72], 00:48:43.011 | 30.00th=[ 96], 40.00th=[ 126], 50.00th=[ 188], 60.00th=[ 215], 00:48:43.011 | 70.00th=[ 241], 80.00th=[ 271], 90.00th=[ 300], 95.00th=[ 326], 00:48:43.011 | 99.00th=[ 351], 99.50th=[ 397], 99.90th=[ 464], 99.95th=[ 464], 00:48:43.011 | 99.99th=[ 464] 00:48:43.011 bw ( KiB/s): min= 176, max= 1040, per=4.67%, avg=359.70, stdev=219.63, samples=20 00:48:43.011 iops : min= 44, max= 260, avg=89.90, stdev=54.87, samples=20 00:48:43.011 lat (msec) : 50=10.37%, 100=23.03%, 250=39.74%, 500=26.86% 00:48:43.011 cpu : usr=41.31%, 
sys=1.45%, ctx=1262, majf=0, minf=9 00:48:43.011 IO depths : 1=0.8%, 2=1.6%, 4=8.6%, 8=76.4%, 16=12.6%, 32=0.0%, >=64=0.0% 00:48:43.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.011 complete : 0=0.0%, 4=89.6%, 8=5.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.011 issued rwts: total=916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.011 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:43.011 00:48:43.011 Run status group 0 (all jobs): 00:48:43.011 READ: bw=7688KiB/s (7873kB/s), 273KiB/s-406KiB/s (279kB/s-416kB/s), io=75.6MiB (79.2MB), run=10001-10064msec 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 
-- # xtrace_disable 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:48:43.011 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:43.012 bdev_null0 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:43.012 [2024-10-01 20:47:36.670352] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:48:43.012 20:47:36 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:43.012 bdev_null1 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:43.012 { 00:48:43.012 "params": { 00:48:43.012 "name": "Nvme$subsystem", 00:48:43.012 "trtype": "$TEST_TRANSPORT", 00:48:43.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:43.012 "adrfam": "ipv4", 00:48:43.012 "trsvcid": "$NVMF_PORT", 00:48:43.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:43.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:43.012 "hdgst": ${hdgst:-false}, 00:48:43.012 "ddgst": ${ddgst:-false} 00:48:43.012 }, 00:48:43.012 "method": "bdev_nvme_attach_controller" 00:48:43.012 } 00:48:43.012 EOF 00:48:43.012 )") 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:43.012 { 00:48:43.012 "params": { 00:48:43.012 "name": "Nvme$subsystem", 00:48:43.012 "trtype": "$TEST_TRANSPORT", 00:48:43.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:43.012 "adrfam": "ipv4", 00:48:43.012 "trsvcid": "$NVMF_PORT", 00:48:43.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:43.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:43.012 "hdgst": ${hdgst:-false}, 00:48:43.012 "ddgst": ${ddgst:-false} 00:48:43.012 }, 00:48:43.012 "method": "bdev_nvme_attach_controller" 00:48:43.012 } 00:48:43.012 EOF 00:48:43.012 )") 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
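The xtrace above shows how the harness assembles the bdev configuration fio will consume: for each subsystem id it appends one bdev_nvme_attach_controller parameter block to a config array via a here-document, then joins the fragments and pretty-prints the result with jq. A minimal standalone sketch of that pattern follows; the function name is illustrative (not the harness helper), and the outer "subsystems"/"bdev" wrapper is assumed from SPDK's usual JSON config layout rather than taken from this trace:

# Sketch only -- gen_target_json_sketch is an illustrative name; the
# "subsystems" wrapper is assumed from SPDK's standard JSON config shape.
gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "$@"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .
}
# e.g. gen_target_json_sketch 0 1 > /tmp/bdev.json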
00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:48:43.012 "params": { 00:48:43.012 "name": "Nvme0", 00:48:43.012 "trtype": "tcp", 00:48:43.012 "traddr": "10.0.0.3", 00:48:43.012 "adrfam": "ipv4", 00:48:43.012 "trsvcid": "4420", 00:48:43.012 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:43.012 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:43.012 "hdgst": false, 00:48:43.012 "ddgst": false 00:48:43.012 }, 00:48:43.012 "method": "bdev_nvme_attach_controller" 00:48:43.012 },{ 00:48:43.012 "params": { 00:48:43.012 "name": "Nvme1", 00:48:43.012 "trtype": "tcp", 00:48:43.012 "traddr": "10.0.0.3", 00:48:43.012 "adrfam": "ipv4", 00:48:43.012 "trsvcid": "4420", 00:48:43.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:43.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:43.012 "hdgst": false, 00:48:43.012 "ddgst": false 00:48:43.012 }, 00:48:43.012 "method": "bdev_nvme_attach_controller" 00:48:43.012 }' 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:43.012 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:43.013 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:48:43.013 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:43.013 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:48:43.013 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:48:43.013 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:48:43.013 20:47:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:43.013 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:48:43.013 ... 00:48:43.013 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:48:43.013 ... 
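Both inputs are handed to fio over anonymous file descriptors (the JSON config on /dev/fd/62, the generated job file on /dev/fd/61), which is why nothing lands on disk. A hypothetical on-disk equivalent, using the job parameters echoed above (randread, bs=8k,16k,128k, iodepth=8, numjobs=2 over two files, runtime=5) and the illustrative /tmp paths from the previous sketch, might look like:

# Hypothetical standalone equivalent of the /dev/fd plumbing above.
# thread=1 is required by SPDK fio plugins; Nvme0n1/Nvme1n1 are the
# namespace bdevs the attached controllers Nvme0/Nvme1 are expected
# to expose under SPDK's default naming.
cat > /tmp/dif.job <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=/tmp/bdev.json
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio /tmp/dif.job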
00:48:43.013 fio-3.35 00:48:43.013 Starting 4 threads 00:48:48.288 00:48:48.288 filename0: (groupid=0, jobs=1): err= 0: pid=108365: Tue Oct 1 20:47:42 2024 00:48:48.288 read: IOPS=1933, BW=15.1MiB/s (15.8MB/s)(75.6MiB/5002msec) 00:48:48.288 slat (nsec): min=5047, max=43941, avg=12641.33, stdev=4939.18 00:48:48.288 clat (usec): min=3106, max=6109, avg=4079.87, stdev=88.18 00:48:48.288 lat (usec): min=3115, max=6130, avg=4092.51, stdev=87.30 00:48:48.288 clat percentiles (usec): 00:48:48.288 | 1.00th=[ 3982], 5.00th=[ 4015], 10.00th=[ 4015], 20.00th=[ 4047], 00:48:48.288 | 30.00th=[ 4047], 40.00th=[ 4080], 50.00th=[ 4080], 60.00th=[ 4080], 00:48:48.288 | 70.00th=[ 4113], 80.00th=[ 4113], 90.00th=[ 4113], 95.00th=[ 4146], 00:48:48.288 | 99.00th=[ 4178], 99.50th=[ 4228], 99.90th=[ 5014], 99.95th=[ 6063], 00:48:48.288 | 99.99th=[ 6128] 00:48:48.288 bw ( KiB/s): min=15232, max=15616, per=25.00%, avg=15473.78, stdev=134.92, samples=9 00:48:48.288 iops : min= 1904, max= 1952, avg=1934.22, stdev=16.87, samples=9 00:48:48.288 lat (msec) : 4=3.84%, 10=96.16% 00:48:48.288 cpu : usr=94.46%, sys=4.40%, ctx=9, majf=0, minf=9 00:48:48.288 IO depths : 1=11.0%, 2=25.0%, 4=50.0%, 8=14.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:48.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:48.288 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:48.288 issued rwts: total=9672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:48.288 latency : target=0, window=0, percentile=100.00%, depth=8 00:48:48.288 filename0: (groupid=0, jobs=1): err= 0: pid=108366: Tue Oct 1 20:47:42 2024 00:48:48.288 read: IOPS=1934, BW=15.1MiB/s (15.8MB/s)(75.6MiB/5001msec) 00:48:48.288 slat (nsec): min=7993, max=43570, avg=16023.19, stdev=3944.64 00:48:48.288 clat (usec): min=3080, max=5474, avg=4056.38, stdev=73.64 00:48:48.288 lat (usec): min=3088, max=5502, avg=4072.40, stdev=74.37 00:48:48.288 clat percentiles (usec): 00:48:48.288 | 1.00th=[ 3982], 5.00th=[ 3982], 10.00th=[ 3982], 20.00th=[ 4015], 00:48:48.288 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4047], 60.00th=[ 4080], 00:48:48.288 | 70.00th=[ 4080], 80.00th=[ 4113], 90.00th=[ 4113], 95.00th=[ 4146], 00:48:48.288 | 99.00th=[ 4178], 99.50th=[ 4228], 99.90th=[ 4817], 99.95th=[ 5407], 00:48:48.288 | 99.99th=[ 5473] 00:48:48.288 bw ( KiB/s): min=15262, max=15616, per=25.01%, avg=15477.11, stdev=128.42, samples=9 00:48:48.288 iops : min= 1907, max= 1952, avg=1934.56, stdev=16.21, samples=9 00:48:48.288 lat (msec) : 4=11.90%, 10=88.10% 00:48:48.288 cpu : usr=94.12%, sys=4.76%, ctx=8, majf=0, minf=10 00:48:48.288 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:48.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:48.288 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:48.288 issued rwts: total=9672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:48.288 latency : target=0, window=0, percentile=100.00%, depth=8 00:48:48.288 filename1: (groupid=0, jobs=1): err= 0: pid=108367: Tue Oct 1 20:47:42 2024 00:48:48.288 read: IOPS=1935, BW=15.1MiB/s (15.9MB/s)(75.6MiB/5001msec) 00:48:48.288 slat (nsec): min=4661, max=49778, avg=11224.81, stdev=4984.19 00:48:48.288 clat (usec): min=1337, max=5254, avg=4077.66, stdev=123.89 00:48:48.288 lat (usec): min=1351, max=5283, avg=4088.88, stdev=123.46 00:48:48.288 clat percentiles (usec): 00:48:48.288 | 1.00th=[ 3949], 5.00th=[ 4015], 10.00th=[ 4047], 20.00th=[ 4047], 00:48:48.288 | 30.00th=[ 4047], 40.00th=[ 4080], 
50.00th=[ 4080], 60.00th=[ 4080], 00:48:48.288 | 70.00th=[ 4113], 80.00th=[ 4113], 90.00th=[ 4113], 95.00th=[ 4146], 00:48:48.288 | 99.00th=[ 4228], 99.50th=[ 4228], 99.90th=[ 4883], 99.95th=[ 5211], 00:48:48.288 | 99.99th=[ 5276] 00:48:48.288 bw ( KiB/s): min=15488, max=15488, per=25.03%, avg=15488.00, stdev= 0.00, samples=9 00:48:48.288 iops : min= 1936, max= 1936, avg=1936.00, stdev= 0.00, samples=9 00:48:48.288 lat (msec) : 2=0.17%, 4=3.85%, 10=95.98% 00:48:48.289 cpu : usr=93.80%, sys=4.96%, ctx=13, majf=0, minf=0 00:48:48.289 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:48.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:48.289 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:48.289 issued rwts: total=9680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:48.289 latency : target=0, window=0, percentile=100.00%, depth=8 00:48:48.289 filename1: (groupid=0, jobs=1): err= 0: pid=108368: Tue Oct 1 20:47:42 2024 00:48:48.289 read: IOPS=1933, BW=15.1MiB/s (15.8MB/s)(75.6MiB/5002msec) 00:48:48.289 slat (nsec): min=4909, max=58099, avg=16767.14, stdev=4179.58 00:48:48.289 clat (usec): min=3082, max=6022, avg=4053.98, stdev=85.60 00:48:48.289 lat (usec): min=3096, max=6038, avg=4070.75, stdev=85.95 00:48:48.289 clat percentiles (usec): 00:48:48.289 | 1.00th=[ 3949], 5.00th=[ 3982], 10.00th=[ 3982], 20.00th=[ 4015], 00:48:48.289 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4047], 60.00th=[ 4080], 00:48:48.289 | 70.00th=[ 4080], 80.00th=[ 4080], 90.00th=[ 4113], 95.00th=[ 4146], 00:48:48.289 | 99.00th=[ 4178], 99.50th=[ 4228], 99.90th=[ 4883], 99.95th=[ 5997], 00:48:48.289 | 99.99th=[ 5997] 00:48:48.289 bw ( KiB/s): min=15262, max=15616, per=25.01%, avg=15477.11, stdev=128.42, samples=9 00:48:48.289 iops : min= 1907, max= 1952, avg=1934.56, stdev=16.21, samples=9 00:48:48.289 lat (msec) : 4=14.86%, 10=85.14% 00:48:48.289 cpu : usr=93.98%, sys=4.78%, ctx=18, majf=0, minf=9 00:48:48.289 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:48.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:48.289 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:48.289 issued rwts: total=9672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:48.289 latency : target=0, window=0, percentile=100.00%, depth=8 00:48:48.289 00:48:48.289 Run status group 0 (all jobs): 00:48:48.289 READ: bw=60.4MiB/s (63.4MB/s), 15.1MiB/s-15.1MiB/s (15.8MB/s-15.9MB/s), io=302MiB (317MB), run=5001-5002msec 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null0 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:48.289 ************************************ 00:48:48.289 END TEST fio_dif_rand_params 00:48:48.289 ************************************ 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:48.289 00:48:48.289 real 0m23.858s 00:48:48.289 user 2m4.983s 00:48:48.289 sys 0m5.705s 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:48:48.289 20:47:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:48.289 20:47:42 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:48:48.289 20:47:42 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:48:48.289 20:47:42 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:48:48.289 20:47:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:48:48.289 ************************************ 00:48:48.289 START TEST fio_dif_digest 00:48:48.289 ************************************ 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:48:48.289 20:47:42 
nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:48:48.289 bdev_null0 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:48:48.289 [2024-10-01 20:47:42.915180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:48.289 { 00:48:48.289 "params": { 00:48:48.289 "name": "Nvme$subsystem", 00:48:48.289 "trtype": "$TEST_TRANSPORT", 00:48:48.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:48.289 "adrfam": "ipv4", 00:48:48.289 "trsvcid": "$NVMF_PORT", 00:48:48.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:48.289 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:48.289 "hdgst": ${hdgst:-false}, 00:48:48.289 "ddgst": ${ddgst:-false} 00:48:48.289 }, 00:48:48.289 "method": "bdev_nvme_attach_controller" 00:48:48.289 } 00:48:48.289 EOF 00:48:48.289 )") 00:48:48.289 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:48:48.290 "params": { 00:48:48.290 "name": "Nvme0", 00:48:48.290 "trtype": "tcp", 00:48:48.290 "traddr": "10.0.0.3", 00:48:48.290 "adrfam": "ipv4", 00:48:48.290 "trsvcid": "4420", 00:48:48.290 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:48.290 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:48.290 "hdgst": true, 00:48:48.290 "ddgst": true 00:48:48.290 }, 00:48:48.290 "method": "bdev_nvme_attach_controller" 00:48:48.290 }' 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:48:48.290 20:47:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:48.290 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:48:48.290 ... 
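The only substantive change from the earlier random-parameter config is "hdgst": true and "ddgst": true in the attach parameters, which asks the NVMe/TCP initiator to negotiate header and data digests (CRC32C checksums over each PDU) with the target. A sketch of the same attach done interactively is below; the --hdgst/--ddgst flag spellings are assumed from the JSON parameter names printed above, not taken from this trace, so verify against rpc.py's own help output:

# Assumed rpc.py spelling of the digest-enabled attach shown in the
# JSON above (flag names inferred; check `rpc.py bdev_nvme_attach_controller -h`).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --hdgst --ddgst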
00:48:48.290 fio-3.35 00:48:48.290 Starting 3 threads 00:49:00.485 00:49:00.485 filename0: (groupid=0, jobs=1): err= 0: pid=108473: Tue Oct 1 20:47:53 2024 00:49:00.485 read: IOPS=175, BW=21.9MiB/s (23.0MB/s)(219MiB/10005msec) 00:49:00.485 slat (nsec): min=8154, max=52029, avg=14200.89, stdev=4053.97 00:49:00.485 clat (usec): min=9097, max=19957, avg=17090.25, stdev=1525.16 00:49:00.485 lat (usec): min=9110, max=19977, avg=17104.45, stdev=1525.49 00:49:00.485 clat percentiles (usec): 00:49:00.485 | 1.00th=[10421], 5.00th=[14222], 10.00th=[16188], 20.00th=[16581], 00:49:00.485 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17171], 60.00th=[17433], 00:49:00.485 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18482], 95.00th=[18744], 00:49:00.485 | 99.00th=[19268], 99.50th=[19530], 99.90th=[19792], 99.95th=[20055], 00:49:00.485 | 99.99th=[20055] 00:49:00.485 bw ( KiB/s): min=21760, max=23040, per=28.03%, avg=22460.63, stdev=503.70, samples=19 00:49:00.485 iops : min= 170, max= 180, avg=175.47, stdev= 3.94, samples=19 00:49:00.485 lat (msec) : 10=0.23%, 20=99.77% 00:49:00.485 cpu : usr=93.36%, sys=5.38%, ctx=34, majf=0, minf=0 00:49:00.485 IO depths : 1=7.4%, 2=92.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:49:00.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:00.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:00.486 issued rwts: total=1754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:00.486 latency : target=0, window=0, percentile=100.00%, depth=3 00:49:00.486 filename0: (groupid=0, jobs=1): err= 0: pid=108474: Tue Oct 1 20:47:53 2024 00:49:00.486 read: IOPS=210, BW=26.4MiB/s (27.7MB/s)(264MiB/10005msec) 00:49:00.486 slat (nsec): min=7538, max=40791, avg=13287.71, stdev=3206.55 00:49:00.486 clat (usec): min=7131, max=17554, avg=14199.30, stdev=1555.52 00:49:00.486 lat (usec): min=7143, max=17563, avg=14212.58, stdev=1555.50 00:49:00.486 clat percentiles (usec): 00:49:00.486 | 1.00th=[ 8455], 5.00th=[11863], 10.00th=[12780], 20.00th=[13435], 00:49:00.486 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:49:00.486 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15795], 95.00th=[16188], 00:49:00.486 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17433], 99.95th=[17433], 00:49:00.486 | 99.99th=[17433] 00:49:00.486 bw ( KiB/s): min=25600, max=28672, per=33.75%, avg=27041.68, stdev=883.56, samples=19 00:49:00.486 iops : min= 200, max= 224, avg=211.26, stdev= 6.90, samples=19 00:49:00.486 lat (msec) : 10=4.26%, 20=95.74% 00:49:00.486 cpu : usr=93.38%, sys=5.35%, ctx=7, majf=0, minf=0 00:49:00.486 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:49:00.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:00.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:00.486 issued rwts: total=2111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:00.486 latency : target=0, window=0, percentile=100.00%, depth=3 00:49:00.486 filename0: (groupid=0, jobs=1): err= 0: pid=108475: Tue Oct 1 20:47:53 2024 00:49:00.486 read: IOPS=239, BW=30.0MiB/s (31.4MB/s)(300MiB/10010msec) 00:49:00.486 slat (nsec): min=4933, max=59980, avg=13834.27, stdev=3424.07 00:49:00.486 clat (usec): min=8163, max=54697, avg=12488.55, stdev=3857.22 00:49:00.486 lat (usec): min=8168, max=54710, avg=12502.39, stdev=3857.25 00:49:00.486 clat percentiles (usec): 00:49:00.486 | 1.00th=[10028], 5.00th=[10683], 10.00th=[11076], 20.00th=[11469], 00:49:00.486 | 30.00th=[11731], 
40.00th=[11994], 50.00th=[12125], 60.00th=[12387], 00:49:00.486 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13173], 95.00th=[13566], 00:49:00.486 | 99.00th=[14353], 99.50th=[52167], 99.90th=[53216], 99.95th=[54789], 00:49:00.486 | 99.99th=[54789] 00:49:00.486 bw ( KiB/s): min=28672, max=32256, per=38.26%, avg=30652.63, stdev=1230.23, samples=19 00:49:00.486 iops : min= 224, max= 252, avg=239.47, stdev= 9.61, samples=19 00:49:00.486 lat (msec) : 10=0.71%, 20=98.42%, 100=0.87% 00:49:00.486 cpu : usr=92.93%, sys=5.62%, ctx=15, majf=0, minf=0 00:49:00.486 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:49:00.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:00.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:00.486 issued rwts: total=2401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:00.486 latency : target=0, window=0, percentile=100.00%, depth=3 00:49:00.486 00:49:00.486 Run status group 0 (all jobs): 00:49:00.486 READ: bw=78.2MiB/s (82.0MB/s), 21.9MiB/s-30.0MiB/s (23.0MB/s-31.4MB/s), io=783MiB (821MB), run=10005-10010msec 00:49:00.486 20:47:53 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:49:00.486 20:47:53 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:49:00.486 20:47:53 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:49:00.486 20:47:53 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:49:00.486 20:47:53 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:49:00.486 20:47:53 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:49:00.486 20:47:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:00.486 20:47:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:49:00.486 20:47:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:00.486 20:47:53 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:49:00.486 20:47:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:00.486 20:47:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:49:00.486 ************************************ 00:49:00.486 END TEST fio_dif_digest 00:49:00.486 ************************************ 00:49:00.486 20:47:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:00.486 00:49:00.486 real 0m11.034s 00:49:00.486 user 0m28.644s 00:49:00.486 sys 0m1.933s 00:49:00.486 20:47:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:49:00.486 20:47:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:49:00.486 20:47:53 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:49:00.486 20:47:53 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:49:00.486 20:47:53 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:49:00.486 20:47:53 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:49:00.486 20:47:53 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:49:00.486 20:47:53 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:49:00.486 20:47:53 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:49:00.486 20:47:53 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:49:00.486 rmmod nvme_tcp 00:49:00.486 rmmod nvme_fabrics 00:49:00.486 rmmod nvme_keyring 00:49:00.486 20:47:54 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:49:00.486 20:47:54 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:49:00.486 20:47:54 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:49:00.486 20:47:54 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 107749 ']' 00:49:00.486 20:47:54 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 107749 00:49:00.486 20:47:54 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 107749 ']' 00:49:00.486 20:47:54 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 107749 00:49:00.486 20:47:54 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:49:00.486 20:47:54 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:49:00.486 20:47:54 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 107749 00:49:00.486 killing process with pid 107749 00:49:00.486 20:47:54 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:49:00.486 20:47:54 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:49:00.486 20:47:54 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 107749' 00:49:00.486 20:47:54 nvmf_dif -- common/autotest_common.sh@969 -- # kill 107749 00:49:00.486 20:47:54 nvmf_dif -- common/autotest_common.sh@974 -- # wait 107749 00:49:00.486 20:47:54 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:49:00.486 20:47:54 nvmf_dif -- nvmf/common.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:49:00.486 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:49:00.486 Waiting for block devices as requested 00:49:00.486 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:49:00.486 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:49:00.486 20:47:54 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:49:00.486 20:47:54 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:49:00.486 20:47:54 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:49:00.486 20:47:54 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:49:00.486 20:47:54 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:49:00.486 20:47:54 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:49:00.486 20:47:54 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:49:00.486 20:47:54 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:49:00.486 20:47:54 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:49:00.486 20:47:54 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:49:00.486 20:47:54 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:49:00.486 20:47:54 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:49:00.487 20:47:54 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:49:00.487 20:47:54 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:49:00.487 20:47:54 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:49:00.487 20:47:54 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:49:00.487 20:47:54 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:49:00.487 20:47:54 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:49:00.487 20:47:54 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:49:00.487 20:47:55 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:00.487 20:47:55 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if2 00:49:00.487 20:47:55 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:49:00.487 20:47:55 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:00.487 20:47:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:49:00.487 20:47:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:00.487 20:47:55 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:49:00.487 00:49:00.487 real 1m0.028s 00:49:00.487 user 3m51.504s 00:49:00.487 sys 0m14.840s 00:49:00.487 ************************************ 00:49:00.487 END TEST nvmf_dif 00:49:00.487 ************************************ 00:49:00.487 20:47:55 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:49:00.487 20:47:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:49:00.487 20:47:55 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:49:00.487 20:47:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:49:00.487 20:47:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:49:00.487 20:47:55 -- common/autotest_common.sh@10 -- # set +x 00:49:00.487 ************************************ 00:49:00.487 START TEST nvmf_abort_qd_sizes 00:49:00.487 ************************************ 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:49:00.487 * Looking for test storage... 00:49:00.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:49:00.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:00.487 --rc genhtml_branch_coverage=1 00:49:00.487 --rc genhtml_function_coverage=1 00:49:00.487 --rc genhtml_legend=1 00:49:00.487 --rc geninfo_all_blocks=1 00:49:00.487 --rc geninfo_unexecuted_blocks=1 00:49:00.487 00:49:00.487 ' 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:49:00.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:00.487 --rc genhtml_branch_coverage=1 00:49:00.487 --rc genhtml_function_coverage=1 00:49:00.487 --rc genhtml_legend=1 00:49:00.487 --rc geninfo_all_blocks=1 00:49:00.487 --rc geninfo_unexecuted_blocks=1 00:49:00.487 00:49:00.487 ' 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:49:00.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:00.487 --rc genhtml_branch_coverage=1 00:49:00.487 --rc genhtml_function_coverage=1 00:49:00.487 --rc genhtml_legend=1 00:49:00.487 --rc geninfo_all_blocks=1 00:49:00.487 --rc geninfo_unexecuted_blocks=1 00:49:00.487 00:49:00.487 ' 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:49:00.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:00.487 --rc genhtml_branch_coverage=1 00:49:00.487 --rc genhtml_function_coverage=1 00:49:00.487 --rc genhtml_legend=1 00:49:00.487 --rc geninfo_all_blocks=1 00:49:00.487 --rc geninfo_unexecuted_blocks=1 00:49:00.487 00:49:00.487 ' 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:00.487 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:00.488 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@458 -- # nvmf_veth_init 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:49:00.488 Cannot find device "nvmf_init_br" 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:49:00.488 Cannot find device "nvmf_init_br2" 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:49:00.488 Cannot find device "nvmf_tgt_br" 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:49:00.488 Cannot find device "nvmf_tgt_br2" 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:49:00.488 Cannot find device "nvmf_init_br" 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:49:00.488 Cannot find device "nvmf_init_br2" 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:49:00.488 Cannot find device "nvmf_tgt_br" 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:49:00.488 Cannot find device "nvmf_tgt_br2" 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:49:00.488 Cannot find device "nvmf_br" 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:49:00.488 Cannot find device "nvmf_init_if" 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:49:00.488 Cannot find device "nvmf_init_if2" 00:49:00.488 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:00.489 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
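
The "Cannot find device" / "Cannot open network namespace" lines above are expected: nvmf_veth_init first runs an idempotent teardown (each ip command is followed by `true`) before building the test topology from scratch. The setup traced below boils down to veth pairs whose initiator ends stay on the host while the target ends move into the nvmf_tgt_ns_spdk namespace, with all host-side peers joined by the nvmf_br bridge. A condensed sketch of one pair (this run creates a second, `*2`, pair the same way):

    # target side of each veth pair lives in its own namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side peers so initiator and target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # sanity check: the namespace-side target address must answer from the host
    ping -c 1 10.0.0.3
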
00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:00.489 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:49:00.489 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:49:00.747 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:49:00.747 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:49:00.747 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:49:00.747 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:49:00.747 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:49:00.747 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:49:00.747 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:49:00.747 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:49:00.747 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:49:00.747 00:49:00.747 --- 10.0.0.3 ping statistics --- 00:49:00.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:00.747 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:49:00.747 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:49:00.747 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:49:00.747 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:49:00.747 00:49:00.747 --- 10.0.0.4 ping statistics --- 00:49:00.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:00.747 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:49:00.747 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:49:00.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:49:00.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:49:00.747 00:49:00.747 --- 10.0.0.1 ping statistics --- 00:49:00.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:00.747 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:49:00.747 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:49:00.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:49:00.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:49:00.747 00:49:00.747 --- 10.0.0.2 ping statistics --- 00:49:00.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:00.747 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:49:00.747 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:00.747 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # return 0 00:49:00.747 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:49:00.747 20:47:55 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:49:01.314 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:49:01.314 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:49:01.314 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:49:01.573 20:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:01.573 20:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:49:01.573 20:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:49:01.573 20:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:01.573 20:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:49:01.573 20:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:49:01.573 20:47:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:49:01.573 20:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:49:01.573 20:47:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:49:01.573 20:47:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:01.573 20:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=109113 00:49:01.573 20:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 109113 00:49:01.573 20:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:49:01.573 20:47:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 109113 ']' 00:49:01.573 20:47:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:01.573 20:47:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:49:01.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:01.573 20:47:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:01.573 20:47:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:49:01.573 20:47:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:01.573 [2024-10-01 20:47:56.704184] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
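
With the namespace reachable (four clean pings above), nvmfappstart launches the target binary inside it — note nvmf/common.sh@227 earlier prepending the `ip netns exec` prefix to NVMF_APP — and waitforlisten blocks until the app's RPC socket answers. A bare-bones equivalent, assuming the default /var/tmp/spdk.sock path the trace mentions (waitforlisten itself retries with a timeout rather than looping forever):

    # run the SPDK NVMe-oF target inside the target namespace
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!
    # wait for the app to come up and expose its RPC socket
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
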
00:49:01.573 [2024-10-01 20:47:56.704516] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:01.868 [2024-10-01 20:47:56.844467] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:49:01.868 [2024-10-01 20:47:56.922457] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:01.868 [2024-10-01 20:47:56.922518] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:01.868 [2024-10-01 20:47:56.922531] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:01.868 [2024-10-01 20:47:56.922539] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:01.868 [2024-10-01 20:47:56.922546] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:49:01.868 [2024-10-01 20:47:56.922652] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:49:01.868 [2024-10-01 20:47:56.922737] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:49:01.868 [2024-10-01 20:47:56.923367] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:49:01.868 [2024-10-01 20:47:56.923392] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:49:01.868 20:47:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:49:01.868 20:47:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:49:01.868 20:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:49:01.868 20:47:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:49:01.868 20:47:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # 
class=01 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:49:02.150 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:49:02.151 20:47:57 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:49:02.151 20:47:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:02.151 ************************************ 00:49:02.151 START TEST spdk_target_abort 00:49:02.151 ************************************ 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:02.151 spdk_targetn1 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:02.151 [2024-10-01 20:47:57.244290] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:02.151 [2024-10-01 20:47:57.280546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:02.151 20:47:57 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:02.151 20:47:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:05.439 Initializing NVMe Controllers 00:49:05.439 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:49:05.439 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:05.439 Initialization complete. Launching workers. 
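
The spdk_target setup traced just above is five RPCs: claim the PCIe drive as a local bdev, create the TCP transport, then publish the bdev through a subsystem listening on the namespace-side address. rpc_cmd is effectively a wrapper around scripts/rpc.py, so the direct equivalent would be roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # attach the PCIe NVMe controller; this creates bdev spdk_targetn1
    $rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420
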
00:49:05.439 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11466, failed: 0 00:49:05.439 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1020, failed to submit 10446 00:49:05.439 success 0, unsuccessful 1020, failed 0 00:49:05.439 20:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:05.439 20:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:08.724 Initializing NVMe Controllers 00:49:08.724 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:49:08.724 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:08.724 Initialization complete. Launching workers. 00:49:08.724 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5881, failed: 0 00:49:08.724 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1230, failed to submit 4651 00:49:08.724 success 0, unsuccessful 1230, failed 0 00:49:08.724 20:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:08.724 20:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:12.009 Initializing NVMe Controllers 00:49:12.009 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:49:12.009 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:12.009 Initialization complete. Launching workers. 
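
Each "Initializing NVMe Controllers … success/unsuccessful/failed" block above and below is one pass of rabort, which sweeps queue depths 4, 24 and 64 over the same connect string; the abort example drives 4 KiB mixed I/O (-w rw with a 50% read mix) and issues abort commands against it in flight, so the final counters tally the abort commands themselves, not the I/O they target. The loop, condensed from the trace:

    conn='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$conn"
    done
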
00:49:12.009 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29947, failed: 0 00:49:12.009 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2627, failed to submit 27320 00:49:12.009 success 0, unsuccessful 2627, failed 0 00:49:12.009 20:48:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:49:12.009 20:48:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:12.009 20:48:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:12.009 20:48:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:12.009 20:48:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:49:12.009 20:48:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:12.009 20:48:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:12.946 20:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:12.946 20:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 109113 00:49:12.946 20:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 109113 ']' 00:49:12.946 20:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 109113 00:49:12.946 20:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:49:12.946 20:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:49:12.946 20:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 109113 00:49:12.946 20:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:49:12.946 20:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:49:12.946 killing process with pid 109113 00:49:12.946 20:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 109113' 00:49:12.946 20:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 109113 00:49:12.946 20:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 109113 00:49:13.205 00:49:13.205 real 0m11.192s 00:49:13.205 user 0m43.072s 00:49:13.205 sys 0m1.688s 00:49:13.205 20:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:49:13.205 20:48:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:13.205 ************************************ 00:49:13.205 END TEST spdk_target_abort 00:49:13.205 ************************************ 00:49:13.205 20:48:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:49:13.205 20:48:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:49:13.205 20:48:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:49:13.205 20:48:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:13.205 ************************************ 00:49:13.205 START TEST kernel_target_abort 00:49:13.205 
************************************ 00:49:13.205 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:49:13.206 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:49:13.206 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:49:13.206 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:49:13.206 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:49:13.206 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:13.206 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:13.206 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:49:13.206 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:13.206 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:49:13.206 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:49:13.206 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:49:13.206 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:49:13.206 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:49:13.206 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:49:13.206 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:49:13.206 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:49:13.206 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:49:13.206 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:49:13.206 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:49:13.206 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:49:13.206 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:49:13.206 20:48:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:49:13.774 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:49:13.775 Waiting for block devices as requested 00:49:13.775 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:49:13.775 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:49:14.033 No valid GPT data, bailing 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n2 ]] 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n2 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n2 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:49:14.033 No valid GPT data, bailing 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n2 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n3 ]] 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n3 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n3 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:49:14.033 No valid GPT data, bailing 00:49:14.033 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n3 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme1n1 ]] 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme1n1 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme1n1 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:49:14.292 No valid GPT data, bailing 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme1n1 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ 
-b /dev/nvme1n1 ]] 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme1n1 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 --hostid=e1b29161-2f6e-4e88-9903-c5274a848703 -a 10.0.0.1 -t tcp -s 4420 00:49:14.292 00:49:14.292 Discovery Log Number of Records 2, Generation counter 2 00:49:14.292 =====Discovery Log Entry 0====== 00:49:14.292 trtype: tcp 00:49:14.292 adrfam: ipv4 00:49:14.292 subtype: current discovery subsystem 00:49:14.292 treq: not specified, sq flow control disable supported 00:49:14.292 portid: 1 00:49:14.292 trsvcid: 4420 00:49:14.292 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:49:14.292 traddr: 10.0.0.1 00:49:14.292 eflags: none 00:49:14.292 sectype: none 00:49:14.292 =====Discovery Log Entry 1====== 00:49:14.292 trtype: tcp 00:49:14.292 adrfam: ipv4 00:49:14.292 subtype: nvme subsystem 00:49:14.292 treq: not specified, sq flow control disable supported 00:49:14.292 portid: 1 00:49:14.292 trsvcid: 4420 00:49:14.292 subnqn: nqn.2016-06.io.spdk:testnqn 00:49:14.292 traddr: 10.0.0.1 00:49:14.292 eflags: none 00:49:14.292 sectype: none 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:49:14.292 20:48:09 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:14.292 20:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:17.574 Initializing NVMe Controllers 00:49:17.574 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:49:17.574 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:17.574 Initialization complete. Launching workers. 00:49:17.574 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33108, failed: 0 00:49:17.574 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33108, failed to submit 0 00:49:17.574 success 0, unsuccessful 33108, failed 0 00:49:17.574 20:48:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:17.574 20:48:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:20.900 Initializing NVMe Controllers 00:49:20.900 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:49:20.900 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:20.900 Initialization complete. Launching workers. 
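
kernel_target_abort reruns the same sweep, but against the Linux kernel nvmet target on the initiator-side address 10.0.0.1, provisioned through configfs (the mkdir/echo/ln -s calls traced at nvmf/common.sh@684–703 above, with /dev/nvme1n1 as the free drive found by the GPT scan). The trace only shows the values being written, so the attribute paths below follow the standard nvmet configfs layout and are inferred rather than taken from the log:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet                    # nvmet_tcp is pulled in when the tcp port is enabled
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"    # inferred attribute name
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp      > "$nvmet/ports/1/addr_trtype"
    echo 4420     > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4     > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"
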
00:49:20.900 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69314, failed: 0 00:49:20.900 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30357, failed to submit 38957 00:49:20.900 success 0, unsuccessful 30357, failed 0 00:49:20.900 20:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:20.900 20:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:24.191 Initializing NVMe Controllers 00:49:24.191 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:49:24.191 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:24.191 Initialization complete. Launching workers. 00:49:24.191 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79341, failed: 0 00:49:24.191 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19802, failed to submit 59539 00:49:24.191 success 0, unsuccessful 19802, failed 0 00:49:24.191 20:48:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:49:24.191 20:48:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:49:24.191 20:48:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:49:24.191 20:48:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:49:24.191 20:48:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:49:24.191 20:48:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:49:24.191 20:48:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:49:24.191 20:48:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:49:24.191 20:48:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:49:24.191 20:48:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:49:24.449 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:49:26.351 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:49:26.351 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:49:26.351 ************************************ 00:49:26.351 END TEST kernel_target_abort 00:49:26.351 ************************************ 00:49:26.351 00:49:26.351 real 0m13.185s 00:49:26.351 user 0m6.558s 00:49:26.351 sys 0m4.117s 00:49:26.351 20:48:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:49:26.351 20:48:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:26.610 20:48:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:49:26.610 20:48:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:49:26.610 
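
clean_kernel_target (traced above at nvmf/common.sh@710–721) unwinds that configfs state in strict reverse order — the namespace must be disabled and the port unlinked before their directories can be removed — and then unloads the modules. Condensed, with the same paths as the setup sketch:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$subsys/namespaces/1/enable"
    rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1"
    rmdir "$nvmet/ports/1"
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet
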
20:48:21 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:49:26.610 20:48:21 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:49:26.610 20:48:21 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:49:26.610 20:48:21 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:49:26.610 20:48:21 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:49:26.610 20:48:21 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:49:26.610 rmmod nvme_tcp 00:49:26.610 rmmod nvme_fabrics 00:49:26.610 rmmod nvme_keyring 00:49:26.610 20:48:21 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:49:26.610 20:48:21 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:49:26.610 20:48:21 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:49:26.610 20:48:21 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 109113 ']' 00:49:26.610 20:48:21 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 109113 00:49:26.610 20:48:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 109113 ']' 00:49:26.610 20:48:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 109113 00:49:26.610 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (109113) - No such process 00:49:26.610 Process with pid 109113 is not found 00:49:26.610 20:48:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 109113 is not found' 00:49:26.610 20:48:21 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:49:26.610 20:48:21 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:49:26.868 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:49:26.868 Waiting for block devices as requested 00:49:27.130 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:49:27.130 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:49:27.130 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:49:27.130 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:49:27.130 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:49:27.130 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:49:27.130 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:49:27.130 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:49:27.130 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:49:27.130 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:49:27.130 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:49:27.130 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:49:27.130 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:49:27.131 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:49:27.395 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:49:27.395 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:49:27.395 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:49:27.395 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:49:27.395 20:48:22 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:49:27.395 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:49:27.395 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:49:27.395 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:27.395 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:27.395 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:49:27.395 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:27.395 20:48:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:49:27.395 20:48:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:27.395 20:48:22 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:49:27.395 00:49:27.395 real 0m27.390s 00:49:27.395 user 0m50.772s 00:49:27.395 sys 0m7.184s 00:49:27.395 20:48:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:49:27.395 ************************************ 00:49:27.395 END TEST nvmf_abort_qd_sizes 00:49:27.395 20:48:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:27.395 ************************************ 00:49:27.395 20:48:22 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:49:27.395 20:48:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:49:27.395 20:48:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:49:27.395 20:48:22 -- common/autotest_common.sh@10 -- # set +x 00:49:27.395 ************************************ 00:49:27.395 START TEST keyring_file 00:49:27.395 ************************************ 00:49:27.395 20:48:22 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:49:27.655 * Looking for test storage... 
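The nvmf_abort_qd_sizes teardown traced above runs in three stages: clean_kernel_target unwinds the configfs-backed kernel nvmet target, nvmfcleanup retries unloading the initiator modules, and iptr plus nvmf_veth_fini dismantle the firewall rules and the NET_TYPE=virt topology. A condensed sketch of the same sequence follows; link and nqn names are taken from this run, while the redirect target of the bare "echo 0" and the body of remove_spdk_ns are suppressed in the trace, so those two steps are assumptions.

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

# 1) Kernel target (clean_kernel_target): unlink the port from the subsystem,
#    then remove leaves before parents -- configfs refuses to rmdir a node
#    that is still referenced.
echo 0 > "$subsys/namespaces/1/enable"   # assumed target of the bare "echo 0"
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir "$subsys/namespaces/1"
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet

# 2) Initiator modules (nvmfcleanup): retried under "set +e" because pending
#    disconnects keep the module refcounts briefly non-zero.
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1
done

# 3) Firewall and virtual network (iptr, nvmf_veth_fini): strip the SPDK
#    iptables rules, detach links from the bridge, bring them down, delete
#    them, then drop the target-side pair inside the namespace.
iptables-save | grep -v SPDK_NVMF | iptables-restore
for link in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$link" nomaster
    ip link set "$link" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # assumed effect of remove_spdk_ns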
00:49:27.655 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:49:27.655 20:48:22 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:49:27.655 20:48:22 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:49:27.655 20:48:22 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:49:27.655 20:48:22 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@345 -- # : 1 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@353 -- # local d=1 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@355 -- # echo 1 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@353 -- # local d=2 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@355 -- # echo 2 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@368 -- # return 0 00:49:27.655 20:48:22 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:27.655 20:48:22 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:49:27.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:27.655 --rc genhtml_branch_coverage=1 00:49:27.655 --rc genhtml_function_coverage=1 00:49:27.655 --rc genhtml_legend=1 00:49:27.655 --rc geninfo_all_blocks=1 00:49:27.655 --rc geninfo_unexecuted_blocks=1 00:49:27.655 00:49:27.655 ' 00:49:27.655 20:48:22 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:49:27.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:27.655 --rc genhtml_branch_coverage=1 00:49:27.655 --rc genhtml_function_coverage=1 00:49:27.655 --rc genhtml_legend=1 00:49:27.655 --rc geninfo_all_blocks=1 00:49:27.655 --rc 
geninfo_unexecuted_blocks=1 00:49:27.655 00:49:27.655 ' 00:49:27.655 20:48:22 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:49:27.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:27.655 --rc genhtml_branch_coverage=1 00:49:27.655 --rc genhtml_function_coverage=1 00:49:27.655 --rc genhtml_legend=1 00:49:27.655 --rc geninfo_all_blocks=1 00:49:27.655 --rc geninfo_unexecuted_blocks=1 00:49:27.655 00:49:27.655 ' 00:49:27.655 20:48:22 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:49:27.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:27.655 --rc genhtml_branch_coverage=1 00:49:27.655 --rc genhtml_function_coverage=1 00:49:27.655 --rc genhtml_legend=1 00:49:27.655 --rc geninfo_all_blocks=1 00:49:27.655 --rc geninfo_unexecuted_blocks=1 00:49:27.655 00:49:27.655 ' 00:49:27.655 20:48:22 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:49:27.655 20:48:22 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:49:27.655 20:48:22 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:49:27.655 20:48:22 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:27.655 20:48:22 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:27.655 20:48:22 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:27.655 20:48:22 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:27.655 20:48:22 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:27.655 20:48:22 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:27.655 20:48:22 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:27.655 20:48:22 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:27.655 20:48:22 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:27.655 20:48:22 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:27.655 20:48:22 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:49:27.655 20:48:22 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:49:27.655 20:48:22 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:27.655 20:48:22 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:27.655 20:48:22 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:49:27.655 20:48:22 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:27.655 20:48:22 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:27.655 20:48:22 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:27.655 20:48:22 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:27.655 20:48:22 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:27.655 20:48:22 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:27.655 20:48:22 keyring_file -- paths/export.sh@5 -- # export PATH 00:49:27.655 20:48:22 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:27.656 20:48:22 keyring_file -- nvmf/common.sh@51 -- # : 0 00:49:27.656 20:48:22 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:27.656 20:48:22 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:27.656 20:48:22 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:27.656 20:48:22 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:27.656 20:48:22 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:27.656 20:48:22 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:27.656 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:27.656 20:48:22 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:27.656 20:48:22 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:27.656 20:48:22 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:27.656 20:48:22 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:49:27.656 20:48:22 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:49:27.656 20:48:22 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:49:27.656 20:48:22 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:49:27.656 20:48:22 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:49:27.656 20:48:22 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:49:27.656 20:48:22 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:49:27.656 20:48:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:49:27.656 20:48:22 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:49:27.656 20:48:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:49:27.656 20:48:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:49:27.656 20:48:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:49:27.656 20:48:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.bHLhmCBS9z 00:49:27.656 20:48:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:49:27.656 20:48:22 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:49:27.656 20:48:22 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:49:27.656 20:48:22 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:49:27.656 20:48:22 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:49:27.656 20:48:22 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:49:27.656 20:48:22 keyring_file -- nvmf/common.sh@731 -- # python - 00:49:27.656 20:48:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.bHLhmCBS9z 00:49:27.656 20:48:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.bHLhmCBS9z 00:49:27.656 20:48:22 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.bHLhmCBS9z 00:49:27.656 20:48:22 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:49:27.656 20:48:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:49:27.656 20:48:22 keyring_file -- keyring/common.sh@17 -- # name=key1 00:49:27.656 20:48:22 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:49:27.656 20:48:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:49:27.656 20:48:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:49:27.656 20:48:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ZAr5KZU4WV 00:49:27.656 20:48:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:49:27.656 20:48:22 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:49:27.656 20:48:22 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:49:27.656 20:48:22 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:49:27.656 20:48:22 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:49:27.656 20:48:22 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:49:27.656 20:48:22 keyring_file -- nvmf/common.sh@731 -- # python - 00:49:27.963 20:48:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ZAr5KZU4WV 00:49:27.963 20:48:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ZAr5KZU4WV 00:49:27.963 20:48:22 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ZAr5KZU4WV 00:49:27.963 20:48:22 keyring_file -- keyring/file.sh@30 -- # tgtpid=110023 00:49:27.963 20:48:22 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:49:27.963 20:48:22 keyring_file -- keyring/file.sh@32 -- # waitforlisten 110023 00:49:27.963 20:48:22 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 110023 ']' 00:49:27.963 20:48:22 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:27.963 20:48:22 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:49:27.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
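prep_key, traced above, turns a raw hex key into an NVMe TLS interchange-format PSK, writes it to a mktemp path, and locks the mode down to 0600. The body of the inline "python -" is elided by xtrace; the sketch below reconstructs a plausible equivalent, assuming the interchange layout NVMeTLSkey-1:<digest>:<base64(key || CRC-32(key))>: with a little-endian CRC. Treat the python body and the byte order as assumptions, not SPDK's exact helper.

prep_key_sketch() {
    local name=$1 key=$2 digest=$3 path
    path=$(mktemp)
    python3 - "$key" "$digest" > "$path" <<'PYEOF'
import base64, sys, zlib
raw = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(raw).to_bytes(4, "little")   # assumed byte order
b64 = base64.b64encode(raw + crc).decode()
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{b64}:")
PYEOF
    chmod 0600 "$path"   # keyring_file rejects anything wider, as shown later
    echo "$path"
}

key0path=$(prep_key_sketch key0 00112233445566778899aabbccddeeff 0)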
00:49:27.963 20:48:22 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:27.963 20:48:22 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:49:27.963 20:48:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:49:27.963 [2024-10-01 20:48:23.010768] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:49:27.963 [2024-10-01 20:48:23.010933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110023 ] 00:49:27.963 [2024-10-01 20:48:23.163719] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:28.222 [2024-10-01 20:48:23.233046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:49:28.482 20:48:23 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:49:28.482 20:48:23 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:49:28.482 20:48:23 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:49:28.482 20:48:23 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:28.482 20:48:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:49:28.482 [2024-10-01 20:48:23.494526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:28.482 null0 00:49:28.482 [2024-10-01 20:48:23.526490] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:49:28.482 [2024-10-01 20:48:23.526734] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:49:28.482 20:48:23 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:28.482 20:48:23 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:49:28.482 20:48:23 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:49:28.482 20:48:23 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:49:28.482 20:48:23 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:49:28.482 20:48:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:49:28.482 20:48:23 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:49:28.482 20:48:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:49:28.482 20:48:23 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:49:28.482 20:48:23 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:28.482 20:48:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:49:28.482 [2024-10-01 20:48:23.554500] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:49:28.482 2024/10/01 20:48:23 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:49:28.482 request: 00:49:28.482 { 00:49:28.482 "method": "nvmf_subsystem_add_listener", 00:49:28.482 "params": { 
00:49:28.482 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:49:28.482 "secure_channel": false, 00:49:28.482 "listen_address": { 00:49:28.483 "trtype": "tcp", 00:49:28.483 "traddr": "127.0.0.1", 00:49:28.483 "trsvcid": "4420" 00:49:28.483 } 00:49:28.483 } 00:49:28.483 } 00:49:28.483 Got JSON-RPC error response 00:49:28.483 GoRPCClient: error on JSON-RPC call 00:49:28.483 20:48:23 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:49:28.483 20:48:23 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:49:28.483 20:48:23 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:49:28.483 20:48:23 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:49:28.483 20:48:23 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:49:28.483 20:48:23 keyring_file -- keyring/file.sh@47 -- # bperfpid=110046 00:49:28.483 20:48:23 keyring_file -- keyring/file.sh@49 -- # waitforlisten 110046 /var/tmp/bperf.sock 00:49:28.483 20:48:23 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:49:28.483 20:48:23 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 110046 ']' 00:49:28.483 20:48:23 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:49:28.483 20:48:23 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:49:28.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:49:28.483 20:48:23 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:49:28.483 20:48:23 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:49:28.483 20:48:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:49:28.483 [2024-10-01 20:48:23.619989] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:49:28.483 [2024-10-01 20:48:23.620096] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110046 ] 00:49:28.743 [2024-10-01 20:48:23.760418] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:28.743 [2024-10-01 20:48:23.831101] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:49:28.743 20:48:23 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:49:28.743 20:48:23 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:49:28.743 20:48:23 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bHLhmCBS9z 00:49:28.743 20:48:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bHLhmCBS9z 00:49:29.002 20:48:24 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ZAr5KZU4WV 00:49:29.002 20:48:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ZAr5KZU4WV 00:49:29.570 20:48:24 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:49:29.570 20:48:24 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:49:29.570 20:48:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:29.570 20:48:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:49:29.570 20:48:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:29.829 20:48:24 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.bHLhmCBS9z == \/\t\m\p\/\t\m\p\.\b\H\L\h\m\C\B\S\9\z ]] 00:49:29.829 20:48:24 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:49:29.829 20:48:24 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:49:29.829 20:48:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:29.829 20:48:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:29.829 20:48:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:49:30.088 20:48:25 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.ZAr5KZU4WV == \/\t\m\p\/\t\m\p\.\Z\A\r\5\K\Z\U\4\W\V ]] 00:49:30.088 20:48:25 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:49:30.088 20:48:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:49:30.088 20:48:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:30.088 20:48:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:30.088 20:48:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:49:30.088 20:48:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:30.346 20:48:25 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:49:30.346 20:48:25 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:49:30.346 20:48:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:49:30.346 20:48:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:30.346 20:48:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:30.346 20:48:25 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:30.346 20:48:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:49:30.913 20:48:25 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:49:30.913 20:48:25 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:49:30.913 20:48:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:49:30.913 [2024-10-01 20:48:26.162341] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:49:31.172 nvme0n1 00:49:31.172 20:48:26 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:49:31.172 20:48:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:49:31.172 20:48:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:31.172 20:48:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:49:31.172 20:48:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:31.172 20:48:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:31.431 20:48:26 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:49:31.431 20:48:26 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:49:31.431 20:48:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:49:31.431 20:48:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:31.431 20:48:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:49:31.431 20:48:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:31.431 20:48:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:31.690 20:48:26 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:49:31.690 20:48:26 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:49:31.949 Running I/O for 1 seconds... 
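The refcnt assertions traced above reduce to a single jq filter over keyring_get_keys output: select the key by name, read its refcnt. Attaching the controller with --psk key0 adds a second reference to key0, while key1 keeps only the keyring's own. Sketch, with helper names being mine:

bperf() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
get_refcnt() { bperf keyring_get_keys | jq -r ".[] | select(.name == \"$1\").refcnt"; }

(( $(get_refcnt key0) == 2 ))   # keyring's ref + the attached controller's ref
(( $(get_refcnt key1) == 1 ))   # loaded but unused: only the keyring's ref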
00:49:32.885 11554.00 IOPS, 45.13 MiB/s
00:49:32.885 Latency(us)
00:49:32.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:49:32.885 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:49:32.886 nvme0n1 : 1.01 11608.87 45.35 0.00 0.00 10996.61 4110.89 20614.05
00:49:32.886 ===================================================================================================================
00:49:32.886 Total : 11608.87 45.35 0.00 0.00 10996.61 4110.89 20614.05
00:49:32.886 {
00:49:32.886 "results": [
00:49:32.886 {
00:49:32.886 "job": "nvme0n1",
00:49:32.886 "core_mask": "0x2",
00:49:32.886 "workload": "randrw",
00:49:32.886 "percentage": 50,
00:49:32.886 "status": "finished",
00:49:32.886 "queue_depth": 128,
00:49:32.886 "io_size": 4096,
00:49:32.886 "runtime": 1.006386,
00:49:32.886 "iops": 11608.865783109066,
00:49:32.886 "mibps": 45.34713196526979,
00:49:32.886 "io_failed": 0,
00:49:32.886 "io_timeout": 0,
00:49:32.886 "avg_latency_us": 10996.60988942753,
00:49:32.886 "min_latency_us": 4110.894545454546,
00:49:32.886 "max_latency_us": 20614.05090909091
00:49:32.886 }
00:49:32.886 ],
00:49:32.886 "core_count": 1
00:49:32.886 }
00:49:32.886 20:48:28 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:49:32.886 20:48:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:49:33.144 20:48:28 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0
00:49:33.144 20:48:28 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:49:33.144 20:48:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:49:33.144 20:48:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:49:33.144 20:48:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:49:33.144 20:48:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:49:33.402 20:48:28 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:49:33.403 20:48:28 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1
00:49:33.403 20:48:28 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:49:33.403 20:48:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:49:33.403 20:48:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:49:33.403 20:48:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:49:33.403 20:48:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:49:33.662 20:48:28 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 ))
00:49:33.662 20:48:28 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:49:33.662 20:48:28 keyring_file -- common/autotest_common.sh@650 -- # local es=0
00:49:33.662 20:48:28 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:49:33.662 20:48:28 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:49:33.662 20:48:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:49:33.662 20:48:28 keyring_file --
common/autotest_common.sh@642 -- # type -t bperf_cmd 00:49:33.662 20:48:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:49:33.662 20:48:28 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:49:33.662 20:48:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:49:33.920 [2024-10-01 20:48:29.141318] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:49:33.920 [2024-10-01 20:48:29.141530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf28760 (107): Transport endpoint is not connected 00:49:33.920 [2024-10-01 20:48:29.142520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf28760 (9): Bad file descriptor 00:49:33.920 [2024-10-01 20:48:29.143515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:49:33.920 [2024-10-01 20:48:29.143541] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:49:33.920 [2024-10-01 20:48:29.143552] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:49:33.920 [2024-10-01 20:48:29.143564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
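The negative test above leans on autotest_common.sh's NOT() helper, whose es bookkeeping is visible in the trace: the wrapped command must fail with a plain non-zero exit (not a signal) for the test to pass. Here the attach uses key1, which evidently does not match the PSK the target expects, so the socket collapses (errno 107) and the RPC returns -5. A simplified shape of the helper, with the sketch name being mine:

# Succeed only when the wrapped command fails outright; an exit above 128
# would mean "killed by signal" and is treated differently in the real helper.
not_ok() {
    local es=0
    "$@" || es=$?
    (( es != 0 && es <= 128 ))
}

bperf() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
not_ok bperf bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1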
00:49:33.920 2024/10/01 20:48:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:49:33.920 request: 00:49:33.920 { 00:49:33.920 "method": "bdev_nvme_attach_controller", 00:49:33.920 "params": { 00:49:33.920 "name": "nvme0", 00:49:33.920 "trtype": "tcp", 00:49:33.920 "traddr": "127.0.0.1", 00:49:33.920 "adrfam": "ipv4", 00:49:33.920 "trsvcid": "4420", 00:49:33.920 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:49:33.920 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:49:33.920 "prchk_reftag": false, 00:49:33.920 "prchk_guard": false, 00:49:33.920 "hdgst": false, 00:49:33.920 "ddgst": false, 00:49:33.920 "psk": "key1", 00:49:33.920 "allow_unrecognized_csi": false 00:49:33.920 } 00:49:33.920 } 00:49:33.920 Got JSON-RPC error response 00:49:33.920 GoRPCClient: error on JSON-RPC call 00:49:34.179 20:48:29 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:49:34.179 20:48:29 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:49:34.179 20:48:29 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:49:34.179 20:48:29 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:49:34.179 20:48:29 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:49:34.179 20:48:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:49:34.179 20:48:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:34.179 20:48:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:34.179 20:48:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:34.179 20:48:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:49:34.438 20:48:29 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:49:34.438 20:48:29 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:49:34.438 20:48:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:49:34.438 20:48:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:34.438 20:48:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:34.438 20:48:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:34.438 20:48:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:49:34.697 20:48:29 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:49:34.697 20:48:29 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:49:34.697 20:48:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:49:34.988 20:48:30 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:49:34.988 20:48:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:49:35.269 20:48:30 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:49:35.269 20:48:30 keyring_file -- keyring/file.sh@78 -- # jq length 00:49:35.269 20:48:30 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:35.530 20:48:30 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:49:35.530 20:48:30 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.bHLhmCBS9z 00:49:35.530 20:48:30 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.bHLhmCBS9z 00:49:35.530 20:48:30 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:49:35.530 20:48:30 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.bHLhmCBS9z 00:49:35.530 20:48:30 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:49:35.530 20:48:30 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:49:35.530 20:48:30 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:49:35.530 20:48:30 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:49:35.530 20:48:30 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bHLhmCBS9z 00:49:35.530 20:48:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bHLhmCBS9z 00:49:35.789 [2024-10-01 20:48:30.979788] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.bHLhmCBS9z': 0100660 00:49:35.789 [2024-10-01 20:48:30.979841] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:49:35.789 2024/10/01 20:48:30 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.bHLhmCBS9z], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:49:35.789 request: 00:49:35.789 { 00:49:35.789 "method": "keyring_file_add_key", 00:49:35.789 "params": { 00:49:35.789 "name": "key0", 00:49:35.789 "path": "/tmp/tmp.bHLhmCBS9z" 00:49:35.789 } 00:49:35.789 } 00:49:35.789 Got JSON-RPC error response 00:49:35.789 GoRPCClient: error on JSON-RPC call 00:49:35.789 20:48:31 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:49:35.789 20:48:31 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:49:35.789 20:48:31 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:49:35.789 20:48:31 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:49:35.789 20:48:31 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.bHLhmCBS9z 00:49:35.789 20:48:31 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bHLhmCBS9z 00:49:35.789 20:48:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bHLhmCBS9z 00:49:36.356 20:48:31 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.bHLhmCBS9z 00:49:36.356 20:48:31 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:49:36.356 20:48:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:49:36.356 20:48:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:36.356 20:48:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:49:36.356 20:48:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:36.356 20:48:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:36.356 20:48:31 keyring_file -- 
keyring/file.sh@89 -- # (( 1 == 1 )) 00:49:36.356 20:48:31 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:49:36.356 20:48:31 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:49:36.356 20:48:31 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:49:36.356 20:48:31 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:49:36.356 20:48:31 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:49:36.356 20:48:31 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:49:36.356 20:48:31 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:49:36.356 20:48:31 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:49:36.356 20:48:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:49:36.923 [2024-10-01 20:48:31.875984] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.bHLhmCBS9z': No such file or directory 00:49:36.923 [2024-10-01 20:48:31.876038] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:49:36.923 [2024-10-01 20:48:31.876061] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:49:36.923 [2024-10-01 20:48:31.876072] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:49:36.923 [2024-10-01 20:48:31.876082] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:49:36.923 [2024-10-01 20:48:31.876090] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:49:36.923 2024/10/01 20:48:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:49:36.923 request: 00:49:36.923 { 00:49:36.923 "method": "bdev_nvme_attach_controller", 00:49:36.923 "params": { 00:49:36.923 "name": "nvme0", 00:49:36.923 "trtype": "tcp", 00:49:36.923 "traddr": "127.0.0.1", 00:49:36.923 "adrfam": "ipv4", 00:49:36.923 "trsvcid": "4420", 00:49:36.923 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:49:36.923 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:49:36.923 "prchk_reftag": false, 00:49:36.923 "prchk_guard": false, 00:49:36.923 "hdgst": false, 00:49:36.923 "ddgst": false, 00:49:36.923 "psk": "key0", 00:49:36.923 "allow_unrecognized_csi": false 00:49:36.923 } 00:49:36.923 } 00:49:36.923 Got JSON-RPC error response 00:49:36.923 
GoRPCClient: error on JSON-RPC call 00:49:36.923 20:48:31 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:49:36.923 20:48:31 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:49:36.923 20:48:31 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:49:36.923 20:48:31 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:49:36.923 20:48:31 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:49:36.923 20:48:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:49:36.923 20:48:32 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:49:36.923 20:48:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:49:36.923 20:48:32 keyring_file -- keyring/common.sh@17 -- # name=key0 00:49:36.923 20:48:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:49:36.923 20:48:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:49:36.923 20:48:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:49:36.924 20:48:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QxlGNHQECM 00:49:36.924 20:48:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:49:36.924 20:48:32 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:49:36.924 20:48:32 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:49:36.924 20:48:32 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:49:36.924 20:48:32 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:49:36.924 20:48:32 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:49:36.924 20:48:32 keyring_file -- nvmf/common.sh@731 -- # python - 00:49:37.182 20:48:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QxlGNHQECM 00:49:37.182 20:48:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QxlGNHQECM 00:49:37.182 20:48:32 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.QxlGNHQECM 00:49:37.182 20:48:32 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QxlGNHQECM 00:49:37.182 20:48:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QxlGNHQECM 00:49:37.441 20:48:32 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:49:37.441 20:48:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:49:37.700 nvme0n1 00:49:37.700 20:48:32 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:49:37.700 20:48:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:49:37.700 20:48:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:37.700 20:48:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:37.700 20:48:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:49:37.700 20:48:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
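Two failure modes of keyring_file are exercised in this stretch: group-accessible permissions (0660 is rejected at add time with "Invalid permissions for key file ... 0100660") and a deleted backing file (the add succeeds, but the later attach fails with "Could not stat key file"), which shows that the keyring registers the path rather than the key bytes. A condensed reproduction, reusing prep_key_sketch from the PSK note earlier:

bperf() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
path=$(prep_key_sketch key0 00112233445566778899aabbccddeeff 0)

chmod 0660 "$path"
bperf keyring_file_add_key key0 "$path" && echo "unexpected: 0660 was accepted"

chmod 0600 "$path"
bperf keyring_file_add_key key0 "$path"   # owner-only access: accepted

rm -f "$path"
# The key is still listed, but the next attach fails at use time: the file is
# re-stat'ed only when a controller actually needs the PSK.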
00:49:37.959 20:48:33 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:49:37.959 20:48:33 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:49:37.959 20:48:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:49:38.217 20:48:33 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:49:38.217 20:48:33 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:49:38.217 20:48:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:49:38.217 20:48:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:38.217 20:48:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:38.476 20:48:33 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:49:38.477 20:48:33 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:49:38.477 20:48:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:38.477 20:48:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:49:38.477 20:48:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:38.477 20:48:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:38.477 20:48:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:49:38.735 20:48:33 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:49:38.736 20:48:33 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:49:38.736 20:48:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:49:38.994 20:48:34 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:49:38.994 20:48:34 keyring_file -- keyring/file.sh@105 -- # jq length 00:49:38.994 20:48:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:39.253 20:48:34 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:49:39.253 20:48:34 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QxlGNHQECM 00:49:39.253 20:48:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QxlGNHQECM 00:49:39.511 20:48:34 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ZAr5KZU4WV 00:49:39.511 20:48:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ZAr5KZU4WV 00:49:39.770 20:48:34 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:49:39.770 20:48:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:49:40.028 nvme0n1 00:49:40.028 20:48:35 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:49:40.028 20:48:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 
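The checks traced above pin down the removal-while-in-use semantics: keyring_file_remove_key on a key a controller still holds leaves the entry listed with "removed": true and refcnt 1, and only bdev_nvme_detach_controller empties the keyring. The save_config call at the end snapshots the whole state, keyring paths included; the real test captures it into a shell variable, but redirecting to a file (as sketched below) works the same way and can seed the next bdevperf instance instead of replaying each RPC:

bperf() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

bperf keyring_file_remove_key key0                # a controller still holds key0
key0=$(bperf keyring_get_keys | jq '.[] | select(.name == "key0")')
[[ $(jq -r .removed <<< "$key0") == true ]]       # flagged, not freed
(( $(jq -r .refcnt <<< "$key0") == 1 ))           # controller's reference remains

bperf bdev_nvme_detach_controller nvme0
(( $(bperf keyring_get_keys | jq length) == 0 ))  # now actually gone

bperf save_config > /tmp/bperf_config.json        # full JSON config, keys included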
00:49:40.287 20:48:35 keyring_file -- keyring/file.sh@113 -- # config='{ 00:49:40.287 "subsystems": [ 00:49:40.287 { 00:49:40.287 "subsystem": "keyring", 00:49:40.287 "config": [ 00:49:40.287 { 00:49:40.287 "method": "keyring_file_add_key", 00:49:40.287 "params": { 00:49:40.287 "name": "key0", 00:49:40.287 "path": "/tmp/tmp.QxlGNHQECM" 00:49:40.287 } 00:49:40.287 }, 00:49:40.287 { 00:49:40.287 "method": "keyring_file_add_key", 00:49:40.287 "params": { 00:49:40.287 "name": "key1", 00:49:40.287 "path": "/tmp/tmp.ZAr5KZU4WV" 00:49:40.287 } 00:49:40.287 } 00:49:40.287 ] 00:49:40.287 }, 00:49:40.287 { 00:49:40.287 "subsystem": "iobuf", 00:49:40.287 "config": [ 00:49:40.287 { 00:49:40.287 "method": "iobuf_set_options", 00:49:40.287 "params": { 00:49:40.287 "large_bufsize": 135168, 00:49:40.287 "large_pool_count": 1024, 00:49:40.287 "small_bufsize": 8192, 00:49:40.287 "small_pool_count": 8192 00:49:40.287 } 00:49:40.287 } 00:49:40.287 ] 00:49:40.287 }, 00:49:40.287 { 00:49:40.287 "subsystem": "sock", 00:49:40.287 "config": [ 00:49:40.287 { 00:49:40.287 "method": "sock_set_default_impl", 00:49:40.287 "params": { 00:49:40.287 "impl_name": "posix" 00:49:40.287 } 00:49:40.287 }, 00:49:40.287 { 00:49:40.287 "method": "sock_impl_set_options", 00:49:40.287 "params": { 00:49:40.287 "enable_ktls": false, 00:49:40.287 "enable_placement_id": 0, 00:49:40.287 "enable_quickack": false, 00:49:40.287 "enable_recv_pipe": true, 00:49:40.287 "enable_zerocopy_send_client": false, 00:49:40.287 "enable_zerocopy_send_server": true, 00:49:40.287 "impl_name": "ssl", 00:49:40.287 "recv_buf_size": 4096, 00:49:40.287 "send_buf_size": 4096, 00:49:40.288 "tls_version": 0, 00:49:40.288 "zerocopy_threshold": 0 00:49:40.288 } 00:49:40.288 }, 00:49:40.288 { 00:49:40.288 "method": "sock_impl_set_options", 00:49:40.288 "params": { 00:49:40.288 "enable_ktls": false, 00:49:40.288 "enable_placement_id": 0, 00:49:40.288 "enable_quickack": false, 00:49:40.288 "enable_recv_pipe": true, 00:49:40.288 "enable_zerocopy_send_client": false, 00:49:40.288 "enable_zerocopy_send_server": true, 00:49:40.288 "impl_name": "posix", 00:49:40.288 "recv_buf_size": 2097152, 00:49:40.288 "send_buf_size": 2097152, 00:49:40.288 "tls_version": 0, 00:49:40.288 "zerocopy_threshold": 0 00:49:40.288 } 00:49:40.288 } 00:49:40.288 ] 00:49:40.288 }, 00:49:40.288 { 00:49:40.288 "subsystem": "vmd", 00:49:40.288 "config": [] 00:49:40.288 }, 00:49:40.288 { 00:49:40.288 "subsystem": "accel", 00:49:40.288 "config": [ 00:49:40.288 { 00:49:40.288 "method": "accel_set_options", 00:49:40.288 "params": { 00:49:40.288 "buf_count": 2048, 00:49:40.288 "large_cache_size": 16, 00:49:40.288 "sequence_count": 2048, 00:49:40.288 "small_cache_size": 128, 00:49:40.288 "task_count": 2048 00:49:40.288 } 00:49:40.288 } 00:49:40.288 ] 00:49:40.288 }, 00:49:40.288 { 00:49:40.288 "subsystem": "bdev", 00:49:40.288 "config": [ 00:49:40.288 { 00:49:40.288 "method": "bdev_set_options", 00:49:40.288 "params": { 00:49:40.288 "bdev_auto_examine": true, 00:49:40.288 "bdev_io_cache_size": 256, 00:49:40.288 "bdev_io_pool_size": 65535, 00:49:40.288 "bdev_io_stack_size": 4096, 00:49:40.288 "iobuf_large_cache_size": 16, 00:49:40.288 "iobuf_small_cache_size": 128 00:49:40.288 } 00:49:40.288 }, 00:49:40.288 { 00:49:40.288 "method": "bdev_raid_set_options", 00:49:40.288 "params": { 00:49:40.288 "process_max_bandwidth_mb_sec": 0, 00:49:40.288 "process_window_size_kb": 1024 00:49:40.288 } 00:49:40.288 }, 00:49:40.288 { 00:49:40.288 "method": "bdev_iscsi_set_options", 00:49:40.288 "params": { 00:49:40.288 
"timeout_sec": 30 00:49:40.288 } 00:49:40.288 }, 00:49:40.288 { 00:49:40.288 "method": "bdev_nvme_set_options", 00:49:40.288 "params": { 00:49:40.288 "action_on_timeout": "none", 00:49:40.288 "allow_accel_sequence": false, 00:49:40.288 "arbitration_burst": 0, 00:49:40.288 "bdev_retry_count": 3, 00:49:40.288 "ctrlr_loss_timeout_sec": 0, 00:49:40.288 "delay_cmd_submit": true, 00:49:40.288 "dhchap_dhgroups": [ 00:49:40.288 "null", 00:49:40.288 "ffdhe2048", 00:49:40.288 "ffdhe3072", 00:49:40.288 "ffdhe4096", 00:49:40.288 "ffdhe6144", 00:49:40.288 "ffdhe8192" 00:49:40.288 ], 00:49:40.288 "dhchap_digests": [ 00:49:40.288 "sha256", 00:49:40.288 "sha384", 00:49:40.288 "sha512" 00:49:40.288 ], 00:49:40.288 "disable_auto_failback": false, 00:49:40.288 "fast_io_fail_timeout_sec": 0, 00:49:40.288 "generate_uuids": false, 00:49:40.288 "high_priority_weight": 0, 00:49:40.288 "io_path_stat": false, 00:49:40.288 "io_queue_requests": 512, 00:49:40.288 "keep_alive_timeout_ms": 10000, 00:49:40.288 "low_priority_weight": 0, 00:49:40.288 "medium_priority_weight": 0, 00:49:40.288 "nvme_adminq_poll_period_us": 10000, 00:49:40.288 "nvme_error_stat": false, 00:49:40.288 "nvme_ioq_poll_period_us": 0, 00:49:40.288 "rdma_cm_event_timeout_ms": 0, 00:49:40.288 "rdma_max_cq_size": 0, 00:49:40.288 "rdma_srq_size": 0, 00:49:40.288 "reconnect_delay_sec": 0, 00:49:40.288 "timeout_admin_us": 0, 00:49:40.288 "timeout_us": 0, 00:49:40.288 "transport_ack_timeout": 0, 00:49:40.288 "transport_retry_count": 4, 00:49:40.288 "transport_tos": 0 00:49:40.288 } 00:49:40.288 }, 00:49:40.288 { 00:49:40.288 "method": "bdev_nvme_attach_controller", 00:49:40.288 "params": { 00:49:40.288 "adrfam": "IPv4", 00:49:40.288 "ctrlr_loss_timeout_sec": 0, 00:49:40.288 "ddgst": false, 00:49:40.288 "fast_io_fail_timeout_sec": 0, 00:49:40.288 "hdgst": false, 00:49:40.288 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:49:40.288 "name": "nvme0", 00:49:40.288 "prchk_guard": false, 00:49:40.288 "prchk_reftag": false, 00:49:40.288 "psk": "key0", 00:49:40.288 "reconnect_delay_sec": 0, 00:49:40.288 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:49:40.288 "traddr": "127.0.0.1", 00:49:40.288 "trsvcid": "4420", 00:49:40.288 "trtype": "TCP" 00:49:40.288 } 00:49:40.288 }, 00:49:40.288 { 00:49:40.288 "method": "bdev_nvme_set_hotplug", 00:49:40.288 "params": { 00:49:40.288 "enable": false, 00:49:40.288 "period_us": 100000 00:49:40.288 } 00:49:40.288 }, 00:49:40.288 { 00:49:40.288 "method": "bdev_wait_for_examine" 00:49:40.288 } 00:49:40.288 ] 00:49:40.288 }, 00:49:40.288 { 00:49:40.288 "subsystem": "nbd", 00:49:40.288 "config": [] 00:49:40.288 } 00:49:40.288 ] 00:49:40.288 }' 00:49:40.288 20:48:35 keyring_file -- keyring/file.sh@115 -- # killprocess 110046 00:49:40.288 20:48:35 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 110046 ']' 00:49:40.288 20:48:35 keyring_file -- common/autotest_common.sh@954 -- # kill -0 110046 00:49:40.288 20:48:35 keyring_file -- common/autotest_common.sh@955 -- # uname 00:49:40.547 20:48:35 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:49:40.547 20:48:35 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110046 00:49:40.547 killing process with pid 110046 00:49:40.547 Received shutdown signal, test time was about 1.000000 seconds 00:49:40.547 00:49:40.547 Latency(us) 00:49:40.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:40.547 =================================================================================================================== 
00:49:40.547 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:49:40.547 20:48:35 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:49:40.547 20:48:35 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:49:40.547 20:48:35 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110046' 00:49:40.547 20:48:35 keyring_file -- common/autotest_common.sh@969 -- # kill 110046 00:49:40.547 20:48:35 keyring_file -- common/autotest_common.sh@974 -- # wait 110046 00:49:40.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:49:40.547 20:48:35 keyring_file -- keyring/file.sh@118 -- # bperfpid=110506 00:49:40.547 20:48:35 keyring_file -- keyring/file.sh@120 -- # waitforlisten 110506 /var/tmp/bperf.sock 00:49:40.547 20:48:35 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 110506 ']' 00:49:40.547 20:48:35 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:49:40.547 20:48:35 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:49:40.548 20:48:35 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:49:40.548 20:48:35 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:49:40.548 20:48:35 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:49:40.548 "subsystems": [ 00:49:40.548 { 00:49:40.548 "subsystem": "keyring", 00:49:40.548 "config": [ 00:49:40.548 { 00:49:40.548 "method": "keyring_file_add_key", 00:49:40.548 "params": { 00:49:40.548 "name": "key0", 00:49:40.548 "path": "/tmp/tmp.QxlGNHQECM" 00:49:40.548 } 00:49:40.548 }, 00:49:40.548 { 00:49:40.548 "method": "keyring_file_add_key", 00:49:40.548 "params": { 00:49:40.548 "name": "key1", 00:49:40.548 "path": "/tmp/tmp.ZAr5KZU4WV" 00:49:40.548 } 00:49:40.548 } 00:49:40.548 ] 00:49:40.548 }, 00:49:40.548 { 00:49:40.548 "subsystem": "iobuf", 00:49:40.548 "config": [ 00:49:40.548 { 00:49:40.548 "method": "iobuf_set_options", 00:49:40.548 "params": { 00:49:40.548 "large_bufsize": 135168, 00:49:40.548 "large_pool_count": 1024, 00:49:40.548 "small_bufsize": 8192, 00:49:40.548 "small_pool_count": 8192 00:49:40.548 } 00:49:40.548 } 00:49:40.548 ] 00:49:40.548 }, 00:49:40.548 { 00:49:40.548 "subsystem": "sock", 00:49:40.548 "config": [ 00:49:40.548 { 00:49:40.548 "method": "sock_set_default_impl", 00:49:40.548 "params": { 00:49:40.548 "impl_name": "posix" 00:49:40.548 } 00:49:40.548 }, 00:49:40.548 { 00:49:40.548 "method": "sock_impl_set_options", 00:49:40.548 "params": { 00:49:40.548 "enable_ktls": false, 00:49:40.548 "enable_placement_id": 0, 00:49:40.548 "enable_quickack": false, 00:49:40.548 "enable_recv_pipe": true, 00:49:40.548 "enable_zerocopy_send_client": false, 00:49:40.548 "enable_zerocopy_send_server": true, 00:49:40.548 "impl_name": "ssl", 00:49:40.548 "recv_buf_size": 4096, 00:49:40.548 "send_buf_size": 4096, 00:49:40.548 "tls_version": 0, 00:49:40.548 "zerocopy_threshold": 0 00:49:40.548 } 00:49:40.548 }, 00:49:40.548 { 00:49:40.548 "method": "sock_impl_set_options", 00:49:40.548 "params": { 00:49:40.548 "enable_ktls": false, 00:49:40.548 "enable_placement_id": 0, 00:49:40.548 "enable_quickack": false, 00:49:40.548 "enable_recv_pipe": true, 00:49:40.548 "enable_zerocopy_send_client": false, 00:49:40.548 "enable_zerocopy_send_server": true, 00:49:40.548 "impl_name": "posix", 00:49:40.548 "recv_buf_size": 2097152, 00:49:40.548 "send_buf_size": 2097152, 
00:49:40.548 "tls_version": 0, 00:49:40.548 "zerocopy_threshold": 0 00:49:40.548 } 00:49:40.548 } 00:49:40.548 ] 00:49:40.548 }, 00:49:40.548 { 00:49:40.548 "subsystem": "vmd", 00:49:40.548 "config": [] 00:49:40.548 }, 00:49:40.548 { 00:49:40.548 "subsystem": "accel", 00:49:40.548 "config": [ 00:49:40.548 { 00:49:40.548 "method": "accel_set_options", 00:49:40.548 "params": { 00:49:40.548 "buf_count": 2048, 00:49:40.548 "large_cache_size": 16, 00:49:40.548 "sequence_count": 2048, 00:49:40.548 "small_cache_size": 128, 00:49:40.548 "task_count": 2048 00:49:40.548 } 00:49:40.548 } 00:49:40.548 ] 00:49:40.548 }, 00:49:40.548 { 00:49:40.548 "subsystem": "bdev", 00:49:40.548 "config": [ 00:49:40.548 { 00:49:40.548 "method": "bdev_set_options", 00:49:40.548 "params": { 00:49:40.548 "bdev_auto_examine": true, 00:49:40.548 "bdev_io_cache_size": 256, 00:49:40.548 "bdev_io_pool_size": 65535, 00:49:40.548 "bdev_io_stack_size": 4096, 00:49:40.548 "iobuf_large_cache_size": 16, 00:49:40.548 "iobuf_small_cache_size": 128 00:49:40.548 } 00:49:40.548 }, 00:49:40.548 { 00:49:40.548 "method": "bdev_raid_set_options", 00:49:40.548 "params": { 00:49:40.548 "process_max_bandwidth_mb_sec": 0, 00:49:40.548 "process_window_size_kb": 1024 00:49:40.548 } 00:49:40.548 }, 00:49:40.548 { 00:49:40.548 "method": "bdev_iscsi_set_options", 00:49:40.548 "params": { 00:49:40.548 "timeout_sec": 30 00:49:40.548 } 00:49:40.548 }, 00:49:40.548 { 00:49:40.548 "method": "bdev_nvme_set_options", 00:49:40.548 "params": { 00:49:40.548 "action_on_timeout": "none", 00:49:40.548 "allow_accel_sequence": false, 00:49:40.548 "arbitration_burst": 0, 00:49:40.548 "bdev_retry_count": 3, 00:49:40.548 "ctrlr_loss_timeout_sec": 0, 00:49:40.548 "delay_cmd_submit": true, 00:49:40.548 "dhchap_dhgroups": [ 00:49:40.548 "null", 00:49:40.548 "ffdhe2048", 00:49:40.548 "ffdhe3072", 00:49:40.548 "ffdhe4096", 00:49:40.548 "ffdhe6144", 00:49:40.548 "ffdhe8192" 00:49:40.548 ], 00:49:40.548 "dhchap_digests": [ 00:49:40.548 "sha256", 00:49:40.548 "sha384", 00:49:40.548 "sha512" 00:49:40.548 ], 00:49:40.548 "disable_auto_failback": false, 00:49:40.548 "fast_io_fail_timeout_sec": 0, 00:49:40.548 "generate_uuids": false, 00:49:40.548 "high_priority_weight": 0, 00:49:40.548 "io_path_stat": false, 00:49:40.548 "io_queue_requests": 512, 00:49:40.548 "keep_alive_timeout_ms": 10000, 00:49:40.548 "low_priority_weight": 0, 00:49:40.548 "medium_priority_weight": 0, 00:49:40.548 "nvme_adminq_poll_period_us": 10000, 00:49:40.548 "nvme_error_stat": false, 00:49:40.548 "nvme_ioq_poll_period_us": 0, 00:49:40.548 "rdma_cm_event_timeout_ms": 0, 00:49:40.548 "rdma_max_cq_size": 0, 00:49:40.548 "rdma_srq_size": 0, 00:49:40.548 "reconnect_delay_sec": 0, 00:49:40.548 "timeout_admin_us": 0, 00:49:40.548 "timeout_us": 0, 00:49:40.548 "transport_ack_timeout": 0, 00:49:40.548 "transport_retry_count": 4, 00:49:40.548 "transport_tos": 0 00:49:40.548 } 00:49:40.548 }, 00:49:40.548 { 00:49:40.548 "method": "bdev_nvme_attach_controller", 00:49:40.548 "params": { 00:49:40.548 "adrfam": "IPv4", 00:49:40.548 "ctrlr_loss_timeout_sec": 0, 00:49:40.548 "ddgst": false, 00:49:40.548 "fast_io_fail_timeout_sec": 0, 00:49:40.548 "hdgst": false, 00:49:40.548 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:49:40.548 "name": "nvme0", 00:49:40.548 "prchk_guard": false, 00:49:40.548 "prchk_reftag": false, 00:49:40.548 "psk": "key0", 00:49:40.548 "reconnect_delay_sec": 0, 00:49:40.548 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:49:40.548 "traddr": "127.0.0.1", 00:49:40.548 "trsvcid": "4420", 00:49:40.548 "trtype": 
"TCP" 00:49:40.548 } 00:49:40.548 }, 00:49:40.548 { 00:49:40.548 "method": "bdev_nvme_set_hotplug", 00:49:40.548 "params": { 00:49:40.548 "enable": false, 00:49:40.548 "period_us": 100000 00:49:40.548 } 00:49:40.548 }, 00:49:40.548 { 00:49:40.548 "method": "bdev_wait_for_examine" 00:49:40.548 } 00:49:40.548 ] 00:49:40.548 }, 00:49:40.548 { 00:49:40.548 "subsystem": "nbd", 00:49:40.548 "config": [] 00:49:40.548 } 00:49:40.548 ] 00:49:40.548 }' 00:49:40.548 20:48:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:49:40.548 20:48:35 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:49:40.807 [2024-10-01 20:48:35.829211] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 00:49:40.807 [2024-10-01 20:48:35.829317] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110506 ] 00:49:40.807 [2024-10-01 20:48:35.959510] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:40.807 [2024-10-01 20:48:36.015625] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:49:41.066 [2024-10-01 20:48:36.198948] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:49:41.635 20:48:36 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:49:41.635 20:48:36 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:49:41.635 20:48:36 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:49:41.635 20:48:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:41.635 20:48:36 keyring_file -- keyring/file.sh@121 -- # jq length 00:49:42.202 20:48:37 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:49:42.202 20:48:37 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:49:42.202 20:48:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:49:42.202 20:48:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:42.202 20:48:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:42.202 20:48:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:42.202 20:48:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:49:42.461 20:48:37 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:49:42.461 20:48:37 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:49:42.461 20:48:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:49:42.461 20:48:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:49:42.461 20:48:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:42.461 20:48:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:49:42.461 20:48:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:42.720 20:48:37 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:49:42.720 20:48:37 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:49:42.720 20:48:37 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:49:42.720 20:48:37 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:49:42.979 20:48:38 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:49:42.979 20:48:38 keyring_file -- keyring/file.sh@1 -- # cleanup 00:49:42.979 20:48:38 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.QxlGNHQECM /tmp/tmp.ZAr5KZU4WV 00:49:42.979 20:48:38 keyring_file -- keyring/file.sh@20 -- # killprocess 110506 00:49:42.979 20:48:38 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 110506 ']' 00:49:42.979 20:48:38 keyring_file -- common/autotest_common.sh@954 -- # kill -0 110506 00:49:42.979 20:48:38 keyring_file -- common/autotest_common.sh@955 -- # uname 00:49:42.979 20:48:38 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:49:42.979 20:48:38 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110506 00:49:42.979 killing process with pid 110506 00:49:42.979 Received shutdown signal, test time was about 1.000000 seconds 00:49:42.979 00:49:42.979 Latency(us) 00:49:42.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:42.979 =================================================================================================================== 00:49:42.979 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:49:42.979 20:48:38 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:49:42.979 20:48:38 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:49:42.979 20:48:38 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110506' 00:49:42.979 20:48:38 keyring_file -- common/autotest_common.sh@969 -- # kill 110506 00:49:42.979 20:48:38 keyring_file -- common/autotest_common.sh@974 -- # wait 110506 00:49:43.238 20:48:38 keyring_file -- keyring/file.sh@21 -- # killprocess 110023 00:49:43.238 20:48:38 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 110023 ']' 00:49:43.238 20:48:38 keyring_file -- common/autotest_common.sh@954 -- # kill -0 110023 00:49:43.238 20:48:38 keyring_file -- common/autotest_common.sh@955 -- # uname 00:49:43.238 20:48:38 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:49:43.238 20:48:38 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110023 00:49:43.238 killing process with pid 110023 00:49:43.238 20:48:38 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:49:43.238 20:48:38 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:49:43.238 20:48:38 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110023' 00:49:43.238 20:48:38 keyring_file -- common/autotest_common.sh@969 -- # kill 110023 00:49:43.238 20:48:38 keyring_file -- common/autotest_common.sh@974 -- # wait 110023 00:49:43.497 00:49:43.497 real 0m16.076s 00:49:43.497 user 0m41.443s 00:49:43.497 sys 0m3.171s 00:49:43.497 20:48:38 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:49:43.497 20:48:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:49:43.497 ************************************ 00:49:43.497 END TEST keyring_file 00:49:43.497 ************************************ 00:49:43.497 20:48:38 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:49:43.497 20:48:38 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper 
/home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:49:43.497 20:48:38 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:49:43.497 20:48:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:49:43.497 20:48:38 -- common/autotest_common.sh@10 -- # set +x 00:49:43.497 ************************************ 00:49:43.497 START TEST keyring_linux 00:49:43.497 ************************************ 00:49:43.497 20:48:38 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:49:43.497 Joined session keyring: 267704417 00:49:43.757 * Looking for test storage... 00:49:43.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:49:43.757 20:48:38 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:49:43.757 20:48:38 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:49:43.757 20:48:38 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:49:43.757 20:48:38 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:49:43.757 20:48:38 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:43.757 20:48:38 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:43.757 20:48:38 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:43.757 20:48:38 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:49:43.757 20:48:38 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:49:43.757 20:48:38 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:49:43.757 20:48:38 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:49:43.757 20:48:38 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:49:43.757 20:48:38 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:49:43.757 20:48:38 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:49:43.757 20:48:38 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:43.757 20:48:38 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:49:43.757 20:48:38 keyring_linux -- scripts/common.sh@345 -- # : 1 00:49:43.757 20:48:38 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:43.758 20:48:38 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:43.758 20:48:38 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:49:43.758 20:48:38 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:49:43.758 20:48:38 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:43.758 20:48:38 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:49:43.758 20:48:38 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:49:43.758 20:48:38 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:49:43.758 20:48:38 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:49:43.758 20:48:38 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:43.758 20:48:38 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:49:43.758 20:48:38 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:49:43.758 20:48:38 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:43.758 20:48:38 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:43.758 20:48:38 keyring_linux -- scripts/common.sh@368 -- # return 0 00:49:43.758 20:48:38 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:43.758 20:48:38 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:49:43.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:43.758 --rc genhtml_branch_coverage=1 00:49:43.758 --rc genhtml_function_coverage=1 00:49:43.758 --rc genhtml_legend=1 00:49:43.758 --rc geninfo_all_blocks=1 00:49:43.758 --rc geninfo_unexecuted_blocks=1 00:49:43.758 00:49:43.758 ' 00:49:43.758 20:48:38 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:49:43.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:43.758 --rc genhtml_branch_coverage=1 00:49:43.758 --rc genhtml_function_coverage=1 00:49:43.758 --rc genhtml_legend=1 00:49:43.758 --rc geninfo_all_blocks=1 00:49:43.758 --rc geninfo_unexecuted_blocks=1 00:49:43.758 00:49:43.758 ' 00:49:43.758 20:48:38 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:49:43.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:43.758 --rc genhtml_branch_coverage=1 00:49:43.758 --rc genhtml_function_coverage=1 00:49:43.758 --rc genhtml_legend=1 00:49:43.758 --rc geninfo_all_blocks=1 00:49:43.758 --rc geninfo_unexecuted_blocks=1 00:49:43.758 00:49:43.758 ' 00:49:43.758 20:48:38 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:49:43.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:43.758 --rc genhtml_branch_coverage=1 00:49:43.758 --rc genhtml_function_coverage=1 00:49:43.758 --rc genhtml_legend=1 00:49:43.758 --rc geninfo_all_blocks=1 00:49:43.758 --rc geninfo_unexecuted_blocks=1 00:49:43.758 00:49:43.758 ' 00:49:43.758 20:48:38 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:49:43.758 20:48:38 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:43.758 20:48:38 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e1b29161-2f6e-4e88-9903-c5274a848703 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=e1b29161-2f6e-4e88-9903-c5274a848703 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:43.758 20:48:38 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:49:43.758 20:48:38 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:43.758 20:48:38 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:43.758 20:48:38 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:43.758 20:48:38 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:43.758 20:48:38 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:43.758 20:48:38 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:43.758 20:48:38 keyring_linux -- paths/export.sh@5 -- # export PATH 00:49:43.758 20:48:38 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@51 -- # : 0 
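The prep_key/format_interchange_psk steps just below turn a raw hex key into the NVMeTLSkey-1 interchange string that later shows up in the keyctl calls. A minimal stand-alone sketch of that derivation, assuming the interchange layout is base64 of the key bytes with a little-endian CRC-32 appended and digest field 00 for "no hash"; the helper below is illustrative, not the test's own code:

key=00112233445566778899aabbccddeeff
python3 - "$key" <<'PY'
import base64, struct, sys, zlib
key = sys.argv[1].encode()                 # the test treats the hex string as raw ASCII bytes
crc = struct.pack("<I", zlib.crc32(key))   # CRC-32 over the key bytes, appended little-endian
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
PY

If that assumption holds, the sketch prints the same NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: string seen in the keyctl add calls further down.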
00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:43.758 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:43.758 20:48:38 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:43.758 20:48:38 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:49:43.758 20:48:38 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:49:43.758 20:48:38 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:49:43.758 20:48:38 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:49:43.758 20:48:38 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:49:43.758 20:48:38 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:49:43.758 20:48:38 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:49:43.758 20:48:38 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:49:43.758 20:48:38 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:49:43.758 20:48:38 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:49:43.758 20:48:38 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:49:43.759 20:48:38 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:49:43.759 20:48:38 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:49:43.759 20:48:38 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:49:43.759 20:48:38 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:49:43.759 20:48:38 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:49:43.759 20:48:38 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:49:43.759 20:48:38 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:49:43.759 20:48:38 keyring_linux -- nvmf/common.sh@731 -- # python - 00:49:43.759 20:48:38 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:49:43.759 /tmp/:spdk-test:key0 00:49:43.759 20:48:38 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:49:43.759 20:48:38 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:49:43.759 20:48:38 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:49:43.759 20:48:38 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:49:43.759 20:48:38 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:49:43.759 20:48:38 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:49:43.759 20:48:38 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:49:43.759 20:48:38 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:49:43.759 20:48:38 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:49:43.759 20:48:38 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:49:43.759 20:48:38 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:49:43.759 20:48:38 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:49:43.759 20:48:38 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:49:43.759 20:48:38 keyring_linux -- nvmf/common.sh@731 -- # python - 00:49:44.017 20:48:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:49:44.017 /tmp/:spdk-test:key1 00:49:44.017 20:48:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:49:44.017 20:48:39 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:49:44.017 20:48:39 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=110673 00:49:44.017 20:48:39 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 110673 00:49:44.017 20:48:39 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 110673 ']' 00:49:44.017 20:48:39 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:44.017 20:48:39 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:49:44.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:44.017 20:48:39 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:44.017 20:48:39 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:49:44.017 20:48:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:49:44.017 [2024-10-01 20:48:39.088116] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:49:44.017 [2024-10-01 20:48:39.088239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110673 ] 00:49:44.017 [2024-10-01 20:48:39.222239] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:44.276 [2024-10-01 20:48:39.277011] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:49:44.276 20:48:39 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:49:44.276 20:48:39 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:49:44.276 20:48:39 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:49:44.276 20:48:39 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:44.276 20:48:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:49:44.276 [2024-10-01 20:48:39.496817] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:44.276 null0 00:49:44.276 [2024-10-01 20:48:39.528830] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:49:44.276 [2024-10-01 20:48:39.529008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:49:44.550 20:48:39 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:44.550 20:48:39 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:49:44.550 94068001 00:49:44.550 20:48:39 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:49:44.550 644295408 00:49:44.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:49:44.550 20:48:39 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=110691 00:49:44.550 20:48:39 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:49:44.550 20:48:39 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 110691 /var/tmp/bperf.sock 00:49:44.550 20:48:39 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 110691 ']' 00:49:44.550 20:48:39 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:49:44.550 20:48:39 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:49:44.550 20:48:39 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:49:44.550 20:48:39 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:49:44.550 20:48:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:49:44.550 [2024-10-01 20:48:39.614661] Starting SPDK v25.01-pre git sha1 1b1c3081e / DPDK 24.03.0 initialization... 
00:49:44.550 [2024-10-01 20:48:39.614783] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110691 ] 00:49:44.550 [2024-10-01 20:48:39.751745] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:44.818 [2024-10-01 20:48:39.809458] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:49:44.818 20:48:39 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:49:44.818 20:48:39 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:49:44.818 20:48:39 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:49:44.819 20:48:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:49:45.077 20:48:40 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:49:45.077 20:48:40 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:49:45.335 20:48:40 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:49:45.336 20:48:40 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:49:45.594 [2024-10-01 20:48:40.794674] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:49:45.853 nvme0n1 00:49:45.853 20:48:40 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:49:45.853 20:48:40 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:49:45.853 20:48:40 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:49:45.853 20:48:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:49:45.853 20:48:40 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:49:45.853 20:48:40 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:46.112 20:48:41 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:49:46.112 20:48:41 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:49:46.112 20:48:41 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:49:46.112 20:48:41 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:49:46.112 20:48:41 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:49:46.112 20:48:41 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:49:46.112 20:48:41 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:46.371 20:48:41 keyring_linux -- keyring/linux.sh@25 -- # sn=94068001 00:49:46.371 20:48:41 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:49:46.371 20:48:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:49:46.371 20:48:41 keyring_linux -- keyring/linux.sh@26 -- # [[ 94068001 == \9\4\0\6\8\0\0\1 ]] 00:49:46.371 20:48:41 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 94068001 00:49:46.371 20:48:41 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:49:46.371 20:48:41 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:49:46.371 Running I/O for 1 seconds... 00:49:47.565 13706.00 IOPS, 53.54 MiB/s 00:49:47.565 Latency(us) 00:49:47.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:47.565 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:49:47.565 nvme0n1 : 1.01 13713.91 53.57 0.00 0.00 9291.35 3172.54 12809.31 00:49:47.565 =================================================================================================================== 00:49:47.565 Total : 13713.91 53.57 0.00 0.00 9291.35 3172.54 12809.31 00:49:47.565 { 00:49:47.565 "results": [ 00:49:47.565 { 00:49:47.565 "job": "nvme0n1", 00:49:47.565 "core_mask": "0x2", 00:49:47.565 "workload": "randread", 00:49:47.565 "status": "finished", 00:49:47.565 "queue_depth": 128, 00:49:47.565 "io_size": 4096, 00:49:47.565 "runtime": 1.00883, 00:49:47.565 "iops": 13713.906208181756, 00:49:47.565 "mibps": 53.56994612570998, 00:49:47.565 "io_failed": 0, 00:49:47.565 "io_timeout": 0, 00:49:47.565 "avg_latency_us": 9291.34987442915, 00:49:47.565 "min_latency_us": 3172.538181818182, 00:49:47.565 "max_latency_us": 12809.309090909092 00:49:47.565 } 00:49:47.565 ], 00:49:47.565 "core_count": 1 00:49:47.565 } 00:49:47.565 20:48:42 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:49:47.566 20:48:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:49:47.824 20:48:42 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:49:47.824 20:48:42 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:49:47.824 20:48:42 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:49:47.824 20:48:42 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:49:47.824 20:48:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:49:47.824 20:48:42 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:49:48.083 20:48:43 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:49:48.083 20:48:43 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:49:48.083 20:48:43 keyring_linux -- keyring/linux.sh@23 -- # return 00:49:48.083 20:48:43 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:49:48.083 20:48:43 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:49:48.083 20:48:43 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:49:48.083 20:48:43 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:49:48.083 20:48:43 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:49:48.083 20:48:43 keyring_linux -- 
common/autotest_common.sh@642 -- # type -t bperf_cmd 00:49:48.083 20:48:43 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:49:48.083 20:48:43 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:49:48.083 20:48:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:49:48.342 [2024-10-01 20:48:43.444095] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:49:48.342 [2024-10-01 20:48:43.444754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161b6a0 (107): Transport endpoint is not connected 00:49:48.342 [2024-10-01 20:48:43.445742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161b6a0 (9): Bad file descriptor 00:49:48.342 [2024-10-01 20:48:43.446738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:49:48.342 [2024-10-01 20:48:43.446775] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:49:48.342 [2024-10-01 20:48:43.446801] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:49:48.342 [2024-10-01 20:48:43.446812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
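The *ERROR* lines above are the expected outcome of the negative-path check: :spdk-test:key1 names a PSK the target side was not set up with, so the attach must fail, and the JSON-RPC error report that follows confirms it. An illustrative replay of that check, reusing the exact RPC from the trace (the && / || tail is added here only to make the expected result explicit):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key1 \
    && echo 'unexpected: attach succeeded' \
    || echo 'attach rejected, as the test requires'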
00:49:48.342 2024/10/01 20:48:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:49:48.342 request: 00:49:48.342 { 00:49:48.342 "method": "bdev_nvme_attach_controller", 00:49:48.342 "params": { 00:49:48.342 "name": "nvme0", 00:49:48.342 "trtype": "tcp", 00:49:48.342 "traddr": "127.0.0.1", 00:49:48.342 "adrfam": "ipv4", 00:49:48.342 "trsvcid": "4420", 00:49:48.342 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:49:48.342 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:49:48.342 "prchk_reftag": false, 00:49:48.342 "prchk_guard": false, 00:49:48.342 "hdgst": false, 00:49:48.342 "ddgst": false, 00:49:48.342 "psk": ":spdk-test:key1", 00:49:48.342 "allow_unrecognized_csi": false 00:49:48.342 } 00:49:48.342 } 00:49:48.342 Got JSON-RPC error response 00:49:48.342 GoRPCClient: error on JSON-RPC call 00:49:48.342 20:48:43 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:49:48.342 20:48:43 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:49:48.342 20:48:43 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:49:48.342 20:48:43 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:49:48.342 20:48:43 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:49:48.342 20:48:43 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:49:48.342 20:48:43 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:49:48.342 20:48:43 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:49:48.342 20:48:43 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:49:48.342 20:48:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:49:48.342 20:48:43 keyring_linux -- keyring/linux.sh@33 -- # sn=94068001 00:49:48.342 20:48:43 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 94068001 00:49:48.342 1 links removed 00:49:48.342 20:48:43 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:49:48.342 20:48:43 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:49:48.343 20:48:43 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:49:48.343 20:48:43 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:49:48.343 20:48:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:49:48.343 20:48:43 keyring_linux -- keyring/linux.sh@33 -- # sn=644295408 00:49:48.343 20:48:43 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 644295408 00:49:48.343 1 links removed 00:49:48.343 20:48:43 keyring_linux -- keyring/linux.sh@41 -- # killprocess 110691 00:49:48.343 20:48:43 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 110691 ']' 00:49:48.343 20:48:43 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 110691 00:49:48.343 20:48:43 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:49:48.343 20:48:43 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:49:48.343 20:48:43 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110691 00:49:48.343 killing process with pid 110691 00:49:48.343 Received shutdown signal, test time was about 1.000000 
seconds 00:49:48.343 00:49:48.343 Latency(us) 00:49:48.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:48.343 =================================================================================================================== 00:49:48.343 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:49:48.343 20:48:43 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:49:48.343 20:48:43 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:49:48.343 20:48:43 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110691' 00:49:48.343 20:48:43 keyring_linux -- common/autotest_common.sh@969 -- # kill 110691 00:49:48.343 20:48:43 keyring_linux -- common/autotest_common.sh@974 -- # wait 110691 00:49:48.602 20:48:43 keyring_linux -- keyring/linux.sh@42 -- # killprocess 110673 00:49:48.602 20:48:43 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 110673 ']' 00:49:48.602 20:48:43 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 110673 00:49:48.602 20:48:43 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:49:48.602 20:48:43 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:49:48.602 20:48:43 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 110673 00:49:48.602 killing process with pid 110673 00:49:48.602 20:48:43 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:49:48.602 20:48:43 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:49:48.602 20:48:43 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 110673' 00:49:48.602 20:48:43 keyring_linux -- common/autotest_common.sh@969 -- # kill 110673 00:49:48.602 20:48:43 keyring_linux -- common/autotest_common.sh@974 -- # wait 110673 00:49:48.860 00:49:48.860 real 0m5.325s 00:49:48.860 user 0m10.750s 00:49:48.860 sys 0m1.501s 00:49:48.860 20:48:44 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:49:48.861 20:48:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:49:48.861 ************************************ 00:49:48.861 END TEST keyring_linux 00:49:48.861 ************************************ 00:49:48.861 20:48:44 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:49:48.861 20:48:44 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:49:48.861 20:48:44 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:49:48.861 20:48:44 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:49:48.861 20:48:44 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:49:48.861 20:48:44 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:49:48.861 20:48:44 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:49:48.861 20:48:44 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:49:48.861 20:48:44 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:49:48.861 20:48:44 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:49:48.861 20:48:44 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:49:48.861 20:48:44 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:49:48.861 20:48:44 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:49:48.861 20:48:44 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:49:48.861 20:48:44 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:49:48.861 20:48:44 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:49:48.861 20:48:44 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:49:48.861 20:48:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:49:48.861 20:48:44 -- common/autotest_common.sh@10 -- # 
set +x 00:49:48.861 20:48:44 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:49:48.861 20:48:44 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:49:48.861 20:48:44 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:49:48.861 20:48:44 -- common/autotest_common.sh@10 -- # set +x 00:49:50.764 INFO: APP EXITING 00:49:50.764 INFO: killing all VMs 00:49:50.764 INFO: killing vhost app 00:49:50.764 INFO: EXIT DONE 00:49:51.332 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:49:51.332 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:49:51.332 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:49:52.269 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:49:52.269 Cleaning 00:49:52.269 Removing: /var/run/dpdk/spdk0/config 00:49:52.269 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:49:52.269 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:49:52.269 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:49:52.269 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:49:52.269 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:49:52.269 Removing: /var/run/dpdk/spdk0/hugepage_info 00:49:52.269 Removing: /var/run/dpdk/spdk1/config 00:49:52.269 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:49:52.269 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:49:52.269 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:49:52.269 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:49:52.269 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:49:52.269 Removing: /var/run/dpdk/spdk1/hugepage_info 00:49:52.269 Removing: /var/run/dpdk/spdk2/config 00:49:52.269 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:49:52.269 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:49:52.269 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:49:52.269 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:49:52.269 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:49:52.269 Removing: /var/run/dpdk/spdk2/hugepage_info 00:49:52.269 Removing: /var/run/dpdk/spdk3/config 00:49:52.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:49:52.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:49:52.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:49:52.269 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:49:52.269 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:49:52.269 Removing: /var/run/dpdk/spdk3/hugepage_info 00:49:52.269 Removing: /var/run/dpdk/spdk4/config 00:49:52.269 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:49:52.269 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:49:52.269 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:49:52.269 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:49:52.269 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:49:52.269 Removing: /var/run/dpdk/spdk4/hugepage_info 00:49:52.269 Removing: /dev/shm/nvmf_trace.0 00:49:52.269 Removing: /dev/shm/spdk_tgt_trace.pid58624 00:49:52.269 Removing: /var/run/dpdk/spdk0 00:49:52.269 Removing: /var/run/dpdk/spdk1 00:49:52.269 Removing: /var/run/dpdk/spdk2 00:49:52.269 Removing: /var/run/dpdk/spdk3 00:49:52.269 Removing: /var/run/dpdk/spdk4 00:49:52.269 Removing: /var/run/dpdk/spdk_pid100551 00:49:52.270 Removing: /var/run/dpdk/spdk_pid100597 00:49:52.270 Removing: /var/run/dpdk/spdk_pid100936 00:49:52.270 Removing: /var/run/dpdk/spdk_pid100982 
00:49:52.270 Removing: /var/run/dpdk/spdk_pid101378 00:49:52.270 Removing: /var/run/dpdk/spdk_pid101954 00:49:52.270 Removing: /var/run/dpdk/spdk_pid102376 00:49:52.270 Removing: /var/run/dpdk/spdk_pid103392 00:49:52.270 Removing: /var/run/dpdk/spdk_pid104445 00:49:52.270 Removing: /var/run/dpdk/spdk_pid104559 00:49:52.270 Removing: /var/run/dpdk/spdk_pid104616 00:49:52.270 Removing: /var/run/dpdk/spdk_pid106193 00:49:52.270 Removing: /var/run/dpdk/spdk_pid106498 00:49:52.270 Removing: /var/run/dpdk/spdk_pid106837 00:49:52.270 Removing: /var/run/dpdk/spdk_pid107406 00:49:52.270 Removing: /var/run/dpdk/spdk_pid107411 00:49:52.270 Removing: /var/run/dpdk/spdk_pid107806 00:49:52.270 Removing: /var/run/dpdk/spdk_pid107961 00:49:52.270 Removing: /var/run/dpdk/spdk_pid108113 00:49:52.270 Removing: /var/run/dpdk/spdk_pid108216 00:49:52.270 Removing: /var/run/dpdk/spdk_pid108354 00:49:52.270 Removing: /var/run/dpdk/spdk_pid108459 00:49:52.270 Removing: /var/run/dpdk/spdk_pid109169 00:49:52.270 Removing: /var/run/dpdk/spdk_pid109204 00:49:52.270 Removing: /var/run/dpdk/spdk_pid109238 00:49:52.270 Removing: /var/run/dpdk/spdk_pid109490 00:49:52.270 Removing: /var/run/dpdk/spdk_pid109525 00:49:52.270 Removing: /var/run/dpdk/spdk_pid109555 00:49:52.270 Removing: /var/run/dpdk/spdk_pid110023 00:49:52.270 Removing: /var/run/dpdk/spdk_pid110046 00:49:52.270 Removing: /var/run/dpdk/spdk_pid110506 00:49:52.270 Removing: /var/run/dpdk/spdk_pid110673 00:49:52.270 Removing: /var/run/dpdk/spdk_pid110691 00:49:52.270 Removing: /var/run/dpdk/spdk_pid58477 00:49:52.270 Removing: /var/run/dpdk/spdk_pid58624 00:49:52.270 Removing: /var/run/dpdk/spdk_pid58880 00:49:52.270 Removing: /var/run/dpdk/spdk_pid58967 00:49:52.270 Removing: /var/run/dpdk/spdk_pid58993 00:49:52.270 Removing: /var/run/dpdk/spdk_pid59102 00:49:52.270 Removing: /var/run/dpdk/spdk_pid59119 00:49:52.270 Removing: /var/run/dpdk/spdk_pid59258 00:49:52.270 Removing: /var/run/dpdk/spdk_pid59549 00:49:52.270 Removing: /var/run/dpdk/spdk_pid59733 00:49:52.270 Removing: /var/run/dpdk/spdk_pid59817 00:49:52.270 Removing: /var/run/dpdk/spdk_pid59923 00:49:52.270 Removing: /var/run/dpdk/spdk_pid60007 00:49:52.270 Removing: /var/run/dpdk/spdk_pid60040 00:49:52.270 Removing: /var/run/dpdk/spdk_pid60075 00:49:52.270 Removing: /var/run/dpdk/spdk_pid60145 00:49:52.270 Removing: /var/run/dpdk/spdk_pid60227 00:49:52.270 Removing: /var/run/dpdk/spdk_pid60869 00:49:52.270 Removing: /var/run/dpdk/spdk_pid60920 00:49:52.270 Removing: /var/run/dpdk/spdk_pid60993 00:49:52.529 Removing: /var/run/dpdk/spdk_pid61003 00:49:52.529 Removing: /var/run/dpdk/spdk_pid61088 00:49:52.529 Removing: /var/run/dpdk/spdk_pid61116 00:49:52.529 Removing: /var/run/dpdk/spdk_pid61200 00:49:52.529 Removing: /var/run/dpdk/spdk_pid61215 00:49:52.529 Removing: /var/run/dpdk/spdk_pid61272 00:49:52.529 Removing: /var/run/dpdk/spdk_pid61284 00:49:52.529 Removing: /var/run/dpdk/spdk_pid61334 00:49:52.529 Removing: /var/run/dpdk/spdk_pid61351 00:49:52.529 Removing: /var/run/dpdk/spdk_pid61511 00:49:52.529 Removing: /var/run/dpdk/spdk_pid61541 00:49:52.530 Removing: /var/run/dpdk/spdk_pid61623 00:49:52.530 Removing: /var/run/dpdk/spdk_pid62111 00:49:52.530 Removing: /var/run/dpdk/spdk_pid62490 00:49:52.530 Removing: /var/run/dpdk/spdk_pid64958 00:49:52.530 Removing: /var/run/dpdk/spdk_pid65004 00:49:52.530 Removing: /var/run/dpdk/spdk_pid65347 00:49:52.530 Removing: /var/run/dpdk/spdk_pid65393 00:49:52.530 Removing: /var/run/dpdk/spdk_pid65799 00:49:52.530 Removing: /var/run/dpdk/spdk_pid66379 
00:49:52.530 Removing: /var/run/dpdk/spdk_pid66825
00:49:52.530 Removing: /var/run/dpdk/spdk_pid67845
00:49:52.530 Removing: /var/run/dpdk/spdk_pid68893
00:49:52.530 Removing: /var/run/dpdk/spdk_pid69015
00:49:52.530 Removing: /var/run/dpdk/spdk_pid69082
00:49:52.530 Removing: /var/run/dpdk/spdk_pid70677
00:49:52.530 Removing: /var/run/dpdk/spdk_pid71030
00:49:52.530 Removing: /var/run/dpdk/spdk_pid74883
00:49:52.530 Removing: /var/run/dpdk/spdk_pid75299
00:49:52.530 Removing: /var/run/dpdk/spdk_pid75909
00:49:52.530 Removing: /var/run/dpdk/spdk_pid76353
00:49:52.530 Removing: /var/run/dpdk/spdk_pid82190
00:49:52.530 Removing: /var/run/dpdk/spdk_pid82697
00:49:52.530 Removing: /var/run/dpdk/spdk_pid82806
00:49:52.530 Removing: /var/run/dpdk/spdk_pid82970
00:49:52.530 Removing: /var/run/dpdk/spdk_pid83006
00:49:52.530 Removing: /var/run/dpdk/spdk_pid83044
00:49:52.530 Removing: /var/run/dpdk/spdk_pid83083
00:49:52.530 Removing: /var/run/dpdk/spdk_pid83261
00:49:52.530 Removing: /var/run/dpdk/spdk_pid83408
00:49:52.530 Removing: /var/run/dpdk/spdk_pid83662
00:49:52.530 Removing: /var/run/dpdk/spdk_pid83787
00:49:52.530 Removing: /var/run/dpdk/spdk_pid84038
00:49:52.530 Removing: /var/run/dpdk/spdk_pid84134
00:49:52.530 Removing: /var/run/dpdk/spdk_pid84255
00:49:52.530 Removing: /var/run/dpdk/spdk_pid84650
00:49:52.530 Removing: /var/run/dpdk/spdk_pid85077
00:49:52.530 Removing: /var/run/dpdk/spdk_pid85078
00:49:52.530 Removing: /var/run/dpdk/spdk_pid85079
00:49:52.530 Removing: /var/run/dpdk/spdk_pid85354
00:49:52.530 Removing: /var/run/dpdk/spdk_pid85682
00:49:52.530 Removing: /var/run/dpdk/spdk_pid86025
00:49:52.530 Removing: /var/run/dpdk/spdk_pid86629
00:49:52.530 Removing: /var/run/dpdk/spdk_pid86631
00:49:52.530 Removing: /var/run/dpdk/spdk_pid87008
00:49:52.530 Removing: /var/run/dpdk/spdk_pid87022
00:49:52.530 Removing: /var/run/dpdk/spdk_pid87042
00:49:52.530 Removing: /var/run/dpdk/spdk_pid87067
00:49:52.530 Removing: /var/run/dpdk/spdk_pid87078
00:49:52.530 Removing: /var/run/dpdk/spdk_pid87476
00:49:52.530 Removing: /var/run/dpdk/spdk_pid87525
00:49:52.530 Removing: /var/run/dpdk/spdk_pid87899
00:49:52.530 Removing: /var/run/dpdk/spdk_pid88150
00:49:52.530 Removing: /var/run/dpdk/spdk_pid88693
00:49:52.530 Removing: /var/run/dpdk/spdk_pid89304
00:49:52.530 Removing: /var/run/dpdk/spdk_pid90692
00:49:52.530 Removing: /var/run/dpdk/spdk_pid91355
00:49:52.530 Removing: /var/run/dpdk/spdk_pid91357
00:49:52.530 Removing: /var/run/dpdk/spdk_pid93529
00:49:52.530 Removing: /var/run/dpdk/spdk_pid93606
00:49:52.530 Removing: /var/run/dpdk/spdk_pid93695
00:49:52.530 Removing: /var/run/dpdk/spdk_pid93782
00:49:52.530 Removing: /var/run/dpdk/spdk_pid93946
00:49:52.530 Removing: /var/run/dpdk/spdk_pid94035
00:49:52.530 Removing: /var/run/dpdk/spdk_pid94121
00:49:52.530 Removing: /var/run/dpdk/spdk_pid94198
00:49:52.530 Removing: /var/run/dpdk/spdk_pid94566
00:49:52.530 Removing: /var/run/dpdk/spdk_pid95322
00:49:52.530 Removing: /var/run/dpdk/spdk_pid96717
00:49:52.530 Removing: /var/run/dpdk/spdk_pid96919
00:49:52.530 Removing: /var/run/dpdk/spdk_pid97211
00:49:52.530 Removing: /var/run/dpdk/spdk_pid97740
00:49:52.530 Removing: /var/run/dpdk/spdk_pid98095
00:49:52.530 Clean
00:49:52.788 20:48:47 -- common/autotest_common.sh@1451 -- # return 0
00:49:52.788 20:48:47 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:49:52.788 20:48:47 -- common/autotest_common.sh@730 -- # xtrace_disable
00:49:52.788 20:48:47 -- common/autotest_common.sh@10 -- # set +x
00:49:52.788 20:48:47 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:49:52.788 20:48:47 -- common/autotest_common.sh@730 -- # xtrace_disable
00:49:52.788 20:48:47 -- common/autotest_common.sh@10 -- # set +x
00:49:52.788 20:48:47 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:49:52.788 20:48:47 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:49:52.788 20:48:47 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:49:52.788 20:48:47 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:49:52.788 20:48:47 -- spdk/autotest.sh@394 -- # hostname
00:49:52.788 20:48:47 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:49:53.048 geninfo: WARNING: invalid characters removed from testname!
00:50:19.597 20:49:11 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:50:19.855 20:49:15 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:50:23.139 20:49:17 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:50:25.671 20:49:20 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:50:28.203 20:49:23 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:50:30.735 20:49:25 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:50:33.268 20:49:28 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
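The lcov invocations above are the standard aggregate-then-filter coverage flow: -a merges the pre-test baseline capture with the post-test capture into one tracefile, and the repeated -r passes strip records for code that is not SPDK's own (DPDK, system headers, sample apps and tools). A minimal standalone sketch of the same flow, with an illustrative output directory rather than this job's real paths:

  #!/usr/bin/env bash
  # Sketch of the lcov aggregate-and-filter pattern seen in the log above.
  # OUT is an illustrative placeholder, not the job's actual output directory.
  OUT=/path/to/output

  # Merge the pre-test baseline with the post-test capture into one tracefile.
  lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

  # Remove records for third-party and helper code, one glob at a time;
  # lcov reads the input fully before writing, so in-place output is safe.
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
  done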
00:50:33.268 20:49:28 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:50:33.268 20:49:28 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:50:33.268 20:49:28 -- common/autotest_common.sh@1681 -- $ lcov --version
00:50:33.527 20:49:28 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:50:33.527 20:49:28 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:50:33.527 20:49:28 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:50:33.527 20:49:28 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:50:33.527 20:49:28 -- scripts/common.sh@336 -- $ IFS=.-:
00:50:33.527 20:49:28 -- scripts/common.sh@336 -- $ read -ra ver1
00:50:33.527 20:49:28 -- scripts/common.sh@337 -- $ IFS=.-:
00:50:33.527 20:49:28 -- scripts/common.sh@337 -- $ read -ra ver2
00:50:33.527 20:49:28 -- scripts/common.sh@338 -- $ local 'op=<'
00:50:33.527 20:49:28 -- scripts/common.sh@340 -- $ ver1_l=2
00:50:33.527 20:49:28 -- scripts/common.sh@341 -- $ ver2_l=1
00:50:33.527 20:49:28 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:50:33.527 20:49:28 -- scripts/common.sh@344 -- $ case "$op" in
00:50:33.527 20:49:28 -- scripts/common.sh@345 -- $ : 1
00:50:33.527 20:49:28 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:50:33.527 20:49:28 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:50:33.527 20:49:28 -- scripts/common.sh@365 -- $ decimal 1
00:50:33.527 20:49:28 -- scripts/common.sh@353 -- $ local d=1
00:50:33.527 20:49:28 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:50:33.527 20:49:28 -- scripts/common.sh@355 -- $ echo 1
00:50:33.527 20:49:28 -- scripts/common.sh@365 -- $ ver1[v]=1
00:50:33.527 20:49:28 -- scripts/common.sh@366 -- $ decimal 2
00:50:33.527 20:49:28 -- scripts/common.sh@353 -- $ local d=2
00:50:33.527 20:49:28 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:50:33.527 20:49:28 -- scripts/common.sh@355 -- $ echo 2
00:50:33.527 20:49:28 -- scripts/common.sh@366 -- $ ver2[v]=2
00:50:33.527 20:49:28 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:50:33.527 20:49:28 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:50:33.527 20:49:28 -- scripts/common.sh@368 -- $ return 0
00:50:33.527 20:49:28 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
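The cmp_versions trace above is the harness checking whether the installed lcov (1.15) is older than 2 before deciding which --rc settings to pass; the full implementation also handles '-' and ':' separators and other comparison operators. The core dotted-version test can be written much more compactly. A rough standalone equivalent (not the actual scripts/common.sh code; it assumes plain numeric, dot-separated versions):

  #!/usr/bin/env bash
  # Returns success (0) when version $1 sorts strictly below version $2,
  # comparing dot-separated fields numerically; missing fields count as 0.
  ver_lt() {
      local IFS=.
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0} y=${b[i]:-0}
          ((x != y)) && { ((x < y)); return; }
      done
      return 1 # equal versions are not "less than"
  }

  # Same decision as in the log: an lcov older than 2 gets the 1.x --rc names.
  if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
      rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi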
00:50:33.527 20:49:28 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:50:33.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:50:33.527 --rc genhtml_branch_coverage=1
00:50:33.527 --rc genhtml_function_coverage=1
00:50:33.527 --rc genhtml_legend=1
00:50:33.527 --rc geninfo_all_blocks=1
00:50:33.527 --rc geninfo_unexecuted_blocks=1
00:50:33.527
00:50:33.527 '
00:50:33.527 20:49:28 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:50:33.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:50:33.527 --rc genhtml_branch_coverage=1
00:50:33.527 --rc genhtml_function_coverage=1
00:50:33.527 --rc genhtml_legend=1
00:50:33.527 --rc geninfo_all_blocks=1
00:50:33.527 --rc geninfo_unexecuted_blocks=1
00:50:33.527
00:50:33.527 '
00:50:33.527 20:49:28 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:50:33.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:50:33.527 --rc genhtml_branch_coverage=1
00:50:33.527 --rc genhtml_function_coverage=1
00:50:33.527 --rc genhtml_legend=1
00:50:33.527 --rc geninfo_all_blocks=1
00:50:33.527 --rc geninfo_unexecuted_blocks=1
00:50:33.527
00:50:33.527 '
00:50:33.527 20:49:28 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:50:33.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:50:33.527 --rc genhtml_branch_coverage=1
00:50:33.527 --rc genhtml_function_coverage=1
00:50:33.527 --rc genhtml_legend=1
00:50:33.527 --rc geninfo_all_blocks=1
00:50:33.527 --rc geninfo_unexecuted_blocks=1
00:50:33.527
00:50:33.527 '
00:50:33.528 20:49:28 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:50:33.528 20:49:28 -- scripts/common.sh@15 -- $ shopt -s extglob
00:50:33.528 20:49:28 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:50:33.528 20:49:28 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:50:33.528 20:49:28 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:50:33.528 20:49:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:50:33.528 20:49:28 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:50:33.528 20:49:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:50:33.528 20:49:28 -- paths/export.sh@5 -- $ export PATH
00:50:33.528 20:49:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:50:33.528 20:49:28 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:50:33.528 20:49:28 -- common/autobuild_common.sh@479 -- $ date +%s
00:50:33.528 20:49:28 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727815768.XXXXXX
00:50:33.528 20:49:28 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727815768.ghWXUu
00:50:33.528 20:49:28 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:50:33.528 20:49:28 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']'
00:50:33.528 20:49:28 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:50:33.528 20:49:28 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:50:33.528 20:49:28 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:50:33.528 20:49:28 -- common/autobuild_common.sh@495 -- $ get_config_params
00:50:33.528 20:49:28 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:50:33.528 20:49:28 -- common/autotest_common.sh@10 -- $ set +x
00:50:33.528 20:49:28 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang'
00:50:33.528 20:49:28 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:50:33.528 20:49:28 -- pm/common@17 -- $ local monitor
00:50:33.528 20:49:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:50:33.528 20:49:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:50:33.528 20:49:28 -- pm/common@25 -- $ sleep 1
00:50:33.528 20:49:28 -- pm/common@21 -- $ date +%s
00:50:33.528 20:49:28 -- pm/common@21 -- $ date +%s
00:50:33.528 20:49:28 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727815768
00:50:33.528 20:49:28 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727815768
00:50:33.528 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727815768_collect-cpu-load.pm.log
00:50:33.528 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727815768_collect-vmstat.pm.log
00:50:34.463 20:49:29 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:50:34.463 20:49:29 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:50:34.463 20:49:29 -- spdk/autopackage.sh@14 -- $ timing_finish
00:50:34.463 20:49:29 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:50:34.463 20:49:29 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:50:34.464 20:49:29 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:50:34.464 20:49:29 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:50:34.464 20:49:29 -- pm/common@29 -- $ signal_monitor_resources TERM
00:50:34.464 20:49:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:50:34.464 20:49:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:50:34.464 20:49:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:50:34.464 20:49:29 -- pm/common@44 -- $ pid=112446
00:50:34.464 20:49:29 -- pm/common@50 -- $ kill -TERM 112446
00:50:34.464 20:49:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:50:34.464 20:49:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:50:34.464 20:49:29 -- pm/common@44 -- $ pid=112448
00:50:34.464 20:49:29 -- pm/common@50 -- $ kill -TERM 112448
00:50:34.731 + [[ -n 5270 ]]
00:50:34.731 + sudo kill 5270
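The stop_monitor_resources sequence above is a plain pid-file protocol: each collector recorded its PID under the power output directory at startup, and teardown TERMs whatever those files point at. A stripped-down sketch of that loop (function name, directory layout, and file glob are assumptions for illustration, not pm/common itself):

  #!/usr/bin/env bash
  # Sketch of pid-file-based teardown, modeled on the trace above.
  # Assumed layout: one <collector>.pid file per monitor in $1.
  stop_monitors() {
      local power_dir=$1 pidfile pid
      for pidfile in "$power_dir"/collect-*.pid; do
          [[ -e $pidfile ]] || continue          # glob may match nothing
          pid=$(<"$pidfile")
          kill -TERM "$pid" 2> /dev/null || true # collector may have exited already
          rm -f "$pidfile"
      done
  }

  stop_monitors /path/to/output/power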
00:50:34.731 [Pipeline] }
00:50:34.747 [Pipeline] // timeout
00:50:34.752 [Pipeline] }
00:50:34.767 [Pipeline] // stage
00:50:34.773 [Pipeline] }
00:50:34.789 [Pipeline] // catchError
00:50:34.799 [Pipeline] stage
00:50:34.801 [Pipeline] { (Stop VM)
00:50:34.813 [Pipeline] sh
00:50:35.094 + vagrant halt
00:50:38.380 ==> default: Halting domain...
00:50:43.702 [Pipeline] sh
00:50:43.981 + vagrant destroy -f
00:50:47.339 ==> default: Removing domain...
00:50:47.350 [Pipeline] sh
00:50:47.627 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_3/output
00:50:47.636 [Pipeline] }
00:50:47.651 [Pipeline] // stage
00:50:47.656 [Pipeline] }
00:50:47.669 [Pipeline] // dir
00:50:47.674 [Pipeline] }
00:50:47.685 [Pipeline] // wrap
00:50:47.691 [Pipeline] }
00:50:47.700 [Pipeline] // catchError
00:50:47.707 [Pipeline] stage
00:50:47.708 [Pipeline] { (Epilogue)
00:50:47.716 [Pipeline] sh
00:50:47.991 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:50:53.314 [Pipeline] catchError
00:50:53.316 [Pipeline] {
00:50:53.331 [Pipeline] sh
00:50:53.617 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:50:53.617 Artifacts sizes are good
00:50:53.625 [Pipeline] }
00:50:53.640 [Pipeline] // catchError
00:50:53.653 [Pipeline] archiveArtifacts
00:50:53.661 Archiving artifacts
00:50:53.825 [Pipeline] cleanWs
00:50:53.837 [WS-CLEANUP] Deleting project workspace...
00:50:53.837 [WS-CLEANUP] Deferred wipeout is used...
00:50:53.843 [WS-CLEANUP] done
00:50:53.845 [Pipeline] }
00:50:53.859 [Pipeline] // stage
00:50:53.864 [Pipeline] }
00:50:53.877 [Pipeline] // node
00:50:53.883 [Pipeline] End of Pipeline
00:50:53.920 Finished: SUCCESS